[Binary tar archive; contents are gzip-compressed and not human-readable. Recoverable member listing:]

var/home/core/zuul-output/                      (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/                 (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/kubelet.log.gz   (file, mode 0644, owner core:core; gzip-compressed kubelet log)

[Remainder of archive: compressed binary data, omitted.]
ϣV+'sMަ7:MWF#uCY(岿g_֡u-Js,AI9+ @9uN`(kr,&,р7kKǦ7H:6um~ob& 71ߗ LЭ7{M,@쑺[7ꪒr_UhU5uՕGlG]UrL~{WJM]GueYX]܂ut;M?O׏?[LsRujo?oFgSs󔿜<iHO?0N=Rӕ=*+EMWj캚45մJuU{WjJKM]GueV{ 7Fן~/G #b~QWxţnqh޷B*dtE)'K2X te_+ңU٤6/)#`q[؄RؕR*?ǟί0/&Z7~z'o>|/=1'7!i<7,to#Z0 Rb봇X6Ra\|efb}HB2940׶D^3&1$xj?(x[4>65B{9!%XcҘLlU/ 3ȴuFFô"#1ْXBN$2s5"Z €Vk \ܐi#L"OTBm4N'P>SգB ))Ƃl4JL{Fm+Z)JVJuR05jF@@*a);d$JF 6B2ڷTUZ΃~:*cȐ]PkHRJ* oRR7ۄP6T UjCP6ToCP6T Uj_P6T UjCP6T=7 mR߆*JmRԆ*Jmt UjCP6T UjCP6T UjCZ UjCP6T UjzY6Taa UjHkHkHkCP6T Uѕu«8:UrK'JS t`Z'w J@X-ro= P {YRW]PwVD~IiRs=FɞWi<m ' ؒYڍEBe+mȢ_Rt#@])iHl&7OJԵ0:dZ(ވ>9~/jsvizNoL|gj'EgyjDc6 .佲PJHYBj'!me A@bD|JBa o͞dzG24e0PB}-tV־ܞ뾅'8IO/ӧcs<@Tx$괇dlRj[+ڵfƈWjxf %/HZQ7s )k#*h!B1ն4cR)r>'(,\]&J"GDI  F]-CL!-}MX2/_zrH7a~bEs6ْcaxy7uܠtG(oA$._ߎ`6eJ-JYy.:%(싪:EHg2{RYPnM7Z0 Ij-oAn;=ĝ[S.,$^E<9^=`t”@;9/|Vsh'J"p(d/4* (nP칠_]eBȝ | .w[P<([gz!:[B!6Ϣow# ޸٧OU!qz{~uqNdW[V+bw^o%"TSQO2C'q'9BULN >M|ٴ.i;)1NJjg(iW4RꮘWh2X\^A̜,k>t.++*A_Ygy% x?Sb|NZ+֪TLh7SuōxvOa]c];ism*ՆgΖ@3/jV^%Nȍ\nϪ]!20%B:B俲SU٤6/)*ôW G|-KC3_E6n  ?Zxxl:0-zڶ2۴jD ~u+N=a8x=C,UnڈuӲzVc1eEi|4ee/Yt`Εܹs˶hYb $=Olx)!hжP @%!@Y;<3Z%x HQ f}Ezc`)+]eN<'hoQ j"Yc6&jP3&rl_N S$\ =۔G&ܹG5gW1/9;-_6/WYgCM Ux}uopTLH&) AEưmhYq``]`),HAА;rWo j7*b%S&,\6AkbMd)Q9l;7+ 9 u8,Z:Iс1bHj;50#gM l]!dUE!P֚bN)%B"u)g 1ooEgz =e#}$A7#]a\C舥]H+L,1⚫5/O[k"sg+S.񚌠}Ae%3 :ɦ8Q;x7LOSԯKLgSݧ˱71 P$.@Zv2/Hh&[*6 ;xɼcTGo"t*  lZphbyœݕ?7N|e½ǻ(ılrtRBh38IZRعXRL R#m$ *=8wnCT@(g!%$U u7U0Yv 9$=w>9k6]J5û]?tk=FBIRQ5,S&( ,#\x)q dhv<9vI otjRAQ^swE%,*Q#I ^rDGh l@mq=@~J+ջ֭d&IEy+Ť@Aל QM8C.2e"&[qlL:@ٺV:'B  %;-Jq بB6(GJRh@H}{ٙ@=&XY"ȠY^'LpSvSkVяrӟ|SثkGCfoSå)䁼G#Fa藟~qosYb2H ?bNN4 @:HׄH@&z ȃs8-~2_׃%+/%T5KU.r𙝽`0rv?ǟ>V JktSN#G;}D3_Z[p@Q;=ĵ[ί^?Eۉc6UMjutfg#٧{z?~uy<{E>1(t|񿟌ɧ)oBh4>?{FJ/LۼoELdk"D<~jɷN?Ďd,*\̪O'~3 %l%wt kmFUfy6ѸO8y.> /. f+:h{nd[m}UB X:ͷXcFr/"#zHq}??oWJ%ueQOȎ?8:}y?_ӓwpfpFIm;jד7k7m(t445Ӵ_]FrK=Rf~V ś‹hu&jă "4 ٸ*M9yUܤU YDfAt!GAu6|1J:85A$? 
*D!AR ܚ 1X !R(&"D%0 k7Ds_XrĂ^HI sQGd$XP`Ilɡ^vOwGtk\/qd4v|vX1Px /عᮚAA힔<;xO^C1GoxrR>8oǨ|~2voGq|߽~恹qيJAT⎇뢢LT%V =JjU5rk[9׎H?}bA夃*tVݵBEᴗ#|zscw@XɆ[x[?~VmE]ͻimq^ofѴ@quQǵb|tX 7-H= !uO=6'tn!a`?pGiYzgK}C9Sn&(AEV'4J@fVD>zA GuF= ;p%{raEGC~H\kJ=!&Z wQygLD?ņ⨘a ޿R8ȕoo*!mS_ϣQi湕w{љ|14wW,|>TC]jF+uow R5&ޣJ]e5y ,"&Tu @D颶 @绯Xj,uŠM*= }zIޥ4I[<皷<gHC >J@tj)0cr[|C љO7+x xztMl "/-f0\g==D2OcKWϥ˺q8G{*]?Ѭ*Ē%,ܺ~VW>FWJx>QZF=iFnzk6oje謹;a>_Yn4pMs ʡ֤B@vT((Zu4@kԚxWe 'ŔɅg$&/PA@=DoJdt`#X )1j6u9XWDAJ8=aFȶZHTrIw9J;}Sͽ@~ݺ5dU#[52R& aHb: 3qr,%&8]&߷ q|ĴGgqX@"7Zh@:mQ&ryQz/.sA1T7Ac%AB2GaIhvvěTrް#MZA-?zR'݈ҸYAC}2fF9"$$q#WhWY;'\Ekij4Rj%#=v<~G`)^FT=d2L\iRE85ť>O'i|Z2 wh01LsZSq fyЉg2OS@F!}$*eWS]-uߣ|&0[ $yC1K &hXt1G Qc;a[>(M48 Ae094 !iK/n&O4U:vD"XՆZ kݶO&OPY6Q9?gF_l47U|},Q8?B_MZM*9Q{Z(7s\'; 8c1gND!NmV|va@"m_Ӝ FE >B9a𣒸cqhx憗 8n wo-I\qzBR ˼mݧN|O268B_++xYJWt rŵ>Kulb?+SoWՋO'Wmbj=zlY#Q1\̪+'~3 %l%wt kmFUfy6ѸO8y.> /. f+:h{nd[m}UB X^ 2\E>#zHq}i??oWJ%ueM?!;ɋW/O|/ߟROOyrsq~E۶ak~ێM ]55Ms 4|h.crف_lCb0c||V'?ikF<Ȁ+Q(B N"9~^EE/72@Ղ**ه|3]ȑv|P@%ᡪ&2m~R?&m}E *D!AR ܚ )HDRXZ;J642o4a%j`Qb Jxw "%1E`iBURu$!N':{O<MvҍNg;pU￴#%2\[RE[IB|w ^ގA^ ύA-!rd\-u\U_?x`4LT8^7hZ cl8yc]E&zvqSZ;,KO_ِ:`qͧ#}t>QĞ'oٿbbHޒ.JT]|'Q&ʋ%MPI |n,6KG#k}d>vdL ;Hc??; Jr694x޹2J$F (̊h^";)w~_YCHR?e%oJ{H#H%) U~utA)G^8Kq j]Qm$9.e=K]qvXc:)غgݛ'Z Ls%>8sE#u (EDRBaЩ ΎɹS ݿ|fm[bM{&YwF{A҂"ͱGG;BG ;08g.qq@$Rr] HH!cS @iXdW}zToz '$uu֎ZtF|,| %KH,!:WiM$ BPJR=--;[#[ [9)G4O^ZdVlR L"I"⮤ة Aznl $ oP IQ95o%ul댜p(k)8|ݷ7-Mm~zsCmCALcHWxTҒ8J'|4LjøKĘ֧ 6 O h!Bs@ iIU` 﨣zChk,S/|G6nxL@'# 2\@s؟5Ap8,hxюbg܏8?_i=/:F38˚X;TQ%]h HRzKpJ4͓Y)>g%% LZ/OvS{},mң{2w}^%Ę2sI(lL]z0zdLOk +`p{N`v1K,EzmC}nsyٚn'8'FEcQ>oB ӇI_P4w٬P2zEi5[(#$)8>jKRjOs9k1t(]g^OR}qG,שkL~ݠ4p0:>hd@ar'ZK(Iz X p22?t**X y{=[W$a iU3pJbA0 ('ֆTG !aOzeɘV!!83uS/IN Ij$f.l䐱e^zI%혒vvX"u/ɦ>uT 1}AՓ`S=9/zr''*'ɚ1gZFzkke+YXВYK.mľ4D ^`?g@m8-#Wp|{E%[k\ J̞|njAMfwAULӏ._r^ .I0bQXB],DD3""&ZH0R- HgKZ3؎%My͉] 9tAH]M/e-<Z9 LKrY ,Rg2DO2xV{)) JnHb΂wDðq$$ Ȫ=ݵ,?Lsw;rߦu9K-6?& C7Ex =tgP7ħ\\Vw/k$^0O'+u/ru}bⅅb#Bz0;ɔn#j/mB4{qIFO}^˝[Y5:m6^mq<:7n' 00S»#ИsR>.l4m9OM#d 
S;i8.mn<1jLk6Quw2CuËyay'2։I}=jFnnCFʫ7I7 獥b}TEx֞XAGh%%L: `D8#Kh얘"Da@NdEx+Z5IX;Ubu^WH!zzݟ : Gï4)k-&f ؖ#U`hi)1H4}!ˡ΋*eyp Bym௴`՜q@a$w\eqJb"<b!3\|0/r;^6ҾԔ|]$:`o cy0QiexRJ]^A+0cE]S4 >?=PDi (J)P(iW̽g=wՄQ @U؅+58!'LaZܧ5Yb0lin\*"?!xDK7x `,&_ 5fx~~ fh[ZcFvR)s߭Mhb}ֲc`~ |M'LO:]O #K8OR[xGzx!$e HR ;?hU}n=Yѭ>]oQxb:QXf+=}>P<+WNsJ u}~ 6qܤ\SﻼTNk^-JR˚_s4e|*SfwL Adz-s5!jm̚Ǔ,-MK.J*Æ#~aKnӢ$JђdKk,)QV8d7060/z Ξy VDApo5[!TZδC^DR*Պ𱌣"VFudXDc츋Z ј{$9@Vm4̸fHg6vI=9;&h_ncnF8.Ơ6@V&|ʏNI%\uiLZTij׸PkT *Ꝋc-8V9"y~):"JPc"z! xNi p'pV=zGKu]/FSp>*!L\\bv8:%؀VA)&844* *%h@QFG 5 T!Y]Y@',v ,]|]S&Swu,sFbR\!"'ZWڇ\:m堽Xr~T*t^D_9S/:Z*Ѫ1NeAh2#*)d?Xs&zGe;VJw܇ ĵvyƄ"?0#0|7~Cf )񟇻ϒcgmI %ȗZrV:Afx4fw\0sM5a='sGɲxb9{)Mt_:s6'{wn-u]^m”;b}DZ^L}OK?o:Ƶr 2qQ>2L!2[$cÓX5D"$ rvwsZV]ڢYJz5~VH7e$>|gYz%KjrvI‰`).&K*v)\V}0P)p/#9θU"X_ruD-箮D-+)  U.i/}?{0gJew W7"bGCKfcd8/ QD+(! fךSʏffsW\:VF,0A.kKBPKyF>bm#!OwF/o &6F"Z[ Ĺ2k,tHy_t@0EQP!!?žFh!`bC0 'YJZ5Li>p)}GQQUYb W@$Q:Bd9LcɁC^bD cdJrjP%a DʹUJ C wEjACXmZ-sbx^ Ŵ 0.iuOi|ga >B63-Wi|N?ڼtK}9+^##PQRuVU/lhT|O*|hN)[t.>=ˁd`kp)BSϘ|IE NVF߉ԅUQlL y.,K{x׆b8N|B$i3D>T0ȕ{WQbQGDurТe'Et HQ[?y˽м.sXٴQ?$\,9RKjV- [N5TzLnL q4L Wfquvq{׮t Fn>ߥ8Z:B堆-|?泭0͞qrlQ3ק=`%| AHRVa S`KƌL"^ˈiZhj4BZwn|^UC<޼L.b>t.+|n`HjXPg' qfAn$*Ű[)rs]D*L$U3IBaw 溥G hjVG쉅߱{4|TIosY+o[KlmyR9kؚRC&V.JF( |VN9ťoW12#XPD.-kAQSZ-^OR}ZK+`ӯSה#X?u ϩtbjgښ۸_y h]9Tm^6ڗutbQr^L]HYTl0_7ݚGX~D*%PPe]:YV‡:d*"-4p|ցA~G+ƺ7R3fe*51D@e] "3''c06`;XR4T2ZP R0eɒNϙ,h: >0b6 m@s"U#@:(vjU qWbj7/慗 }[_}W ]5v|̏=5B:- DوdRلo<>JEu1 JwLc+|k,2|g8{mE3 kYSFf HIl"hXJ췒(" $CD]P.$dF) y/xheKR:b{rA{-D(>5ygyo bڜgt4(heʡƸMh"M@f|hp8z:3mz}CZc րL>  tf~^;Tծɢ$!2WB+ϺnMEIbQG B@+XD!6nW)⏯DN5ܧW/Aaũ޳f36lf^M7tchD ыŻ+d䌓JPdk"S:"ohD$xI !HJG^TL>H&619 E;4lG<;q-l^߫k_oک5abkmW ,K#u X jJZѪ]GEo wown]n72GzJz ok̓frSwf>_b@]s%KWg8&LYk o$.U]d:/w6{.?[BhU[`ײhw psNe3vTJŵ r$]Ff%j3􁛪#_^|]uToS=",:P/3,N\.[= dy8`ޱv@ x cۊCw^}J Q*oָE "a;U |X"P` -"<".ݔm ?ѤX0:Ykl:rmI1h𙇴ƻ<1AV{K͸ kl.&~+I:^tѲ/UZSu}}˾3ԡeUUv]ٰ|q҃o k/&,€WB@HwX+ e`$-:X$멞g[x<rako.bo-K~:Bw,zD~p2@'mz)~|C=xa})&)Dz[:P^$VVAY!."*m(?,;sX˚5з7^rD簵߷tR1on{[R3y $ƨlbɃ)$xL2z/3CMj}eCe4mvŲ3oz1؞Im͚ 
voc8!Z惾-VbOux1t9X^ SlqVZ&P;T+(=L@P'oP( ፓLR",%&A(dt.lsZKhD e։JI)%6=l .0`e)O_vnSVycHTi|:Eϝ*exi!Ė,y_1JΏ,ŧӎr, +G9<[5ְL"40x(L!\"dQ?ulYJ `) w&A'7Zb΂,P 8gTyp:Q;6F2*S~(X<ł4Ҵ9[zpsϼ>APB*Jr \){AhJT$A"ugEb\UVg1GYZ^~gٚ8?"OWߚvT**w_,Ng-YtP-b(݌'8)?5lTOH%R|r5'q64z2&)zmf*9$"1KeZpf5\6sxO_Ja5H8<]xV9;.hĘ]=vz]}F곁vFiz4o^]kOn_'fS`i?Vos$M/?E|=jr'4''m鼫 ͺ E^`Ū/ˉ}s|>pNv bsˇYFzäHذ*[htQFu8'rt1?m 7,l<__x9c>~?QXg2+!x!iO睻R5ZyK׾ϧ^L;`֘mԇ70ſgXmw~7\Oq˯z(/nюwGtQslFltv0 ]>N۸lqy8ѦF.$ 2YDO  RH9J( qklu*.0I/mXyGxf"ye vڙlB))#yr/ZeMuL0æN/:GۛL|'lX;]UCa◔tV<[2POSqRΉ+'n}3 F)mBԌd0 @ ;ndC8.i}s*iqGۃ|j$S{) }d&9 SN!fLRL>=E1rUݗV3?L]w4knWZr26Or w*W?7~^>`4u6 g{dr\WFmKf5E{+/}e;01-^˂iᔙU}!6l?KrueuF$єRl MahDl(D/ ,mQGQQ FS ƧY'`ds_d.ա*JD `u4`DԫGN#A) Pc+.-Xwm{0`,106H>܋觤",o̐CR%5LOOMOWթӞbO1Zks*ʕT;guF,4c(<_jry֠ZIWpz-OCQ `2_7[3ɃCg*g+87M@[|II07yw h'?_% 8So6 i3LL԰bͤUd2NIoLb\`+=|q8ݿ+Duu'?4 T/8lp1 m|mg=aa(s pymhs}UWr'͙5;Ux0[Az3z5I/_1߈Mfcfx5йU 6xhUNs3u {rk:!K@e-% o) 󃋆E DvQb}ds\ F[n5‰R$8(i"'%O | nmR$vㄹsp M8a)5df\8L5jt|S`T.2ɏڣBNfoe;p[H7o&x+ (>s޸]7^xig++R&C5V7hdsܶz[zĀ}!ikmڡu5[/s0YjѥYu-wlx|xuMF -χp÷ϼΜ^A-/]qvoͶQkݧ^R$=ǟ5]}mvNORI'`:bVR{J%ʼ"Rr@Mǣ~ 7A CF8%8ZS+ܖvu5{'+EpՄoB÷Ul RR?vJa.MRLbYNj%۸"pN÷j+A|-"VX~БhzDky{\Fv-ḵԮC]RrZj^a nGb@`Aј,.bRb^RL|fw8I;^&XW(y7Sok~2rJo_SVS7PANW|oe0u8eKe@tI-\ԏJ~뽷 / Ga(,pQX8 Ga(,滎͙Qj Ga(,Rmp,UX8 Ga(,pt4CGz}-k,kwYӤ5{Z$),ֲve,kWFu _r8x4 ?O˫lLua.tۆ_+15"`_~\q?z۫=/rDA?Eq$r4&1˿6~zN?[UeOw#A}z0{-871jwoAޮz帴B[K08N8/UE}Ֆi8O WE٭Hka '=6 gP:ϩSQDӋZ(pbn>(FGτV()*8DL!NDO`%D fV,!g מjIw^&1yt|Jlm1I/ٲ=eˌ\ly 5~ԟ^_Dii|@`87Y"V9G^XP^Q t] eP6u;_{qa`,"] ;8>M'"<.,i Ŵ; ?81-%ka^S9)bP0> M?Cz\<:y`Ϡ\2QNazUnK?8Vb#p(f -G@oei͉t"8]~t%jgv+!7iUR;'w lJ CԨ&dw Y[%;90Cg^jBHZZƠSV ddI`?|]kjםӫ-=}\䚋l'7=>uddZl@M;Y a6P̜f; `{MDn2gF^.l؁e_%!N `3%@gQ2j/ׂ@IGem !Vt 1 Ε1P$aAq"QFI{+f>tc-)iEӊmSӎZöREHo}s/) $g|;/&<Ӗ^iQj N8!G P6b5 vMՇ᰼>K}zBk!MRQI Tq;h ^3A sk,U'&3*d;q2r=VLQ'AZe2)Ĩ5`8\ 1PY[Bj?B ``h9:],G?͒\xxP1gz5zh,"E#x[ )̔bz0WQzN81);,gRpH(Łqo3jc"*.(CR08u3q֬yAHSyce$7S Q,h L *\Ir\Eg`n0C6:#UX-WiN"(/gy \pI1Sۛ>R7c)ZPz_G(m6}e]eӠ`2kdU 3PVge c4[޶vp χY864>ՇUPWw97`vًvB 
3Ljn{cկ6?Q/A4ptj<u4^]<+#5rQt=Jg8ԈGːc"I0U[4"Boq#xɺU`Rc n 1H02/dDX^+ @L*&&qN:ltw Y17ηm]~O;FIIR4 47;% XS"&NVRE(ol~)?S'aTL^HQ{G1~PBHdgn%y)v J"Yjudp~Bx@,&T$)E,IX,juVxYG'ct[PۺFyjd]e[&ErZJSf,3v2 glHı 6;ZJ,aKPʧSQ:[X1zLݢsIƁ} 6kpߧPjGoβ(tˣ ]B[AJ (c0=4. گJ fAnyv[i N*R7q3އ*envV \s}`MFg~kȘ[;_x-#3Qm)pozivvozz 16To/&fXfcvaAY?,wӹjnDhŤPo1X3v$0G:[7Z;v06p':ՊD'ܜGUQ7nu7u\&\F篣T/}1ި8G7n^/-L/NR;foX|/Wߞ?~թT8}÷lq?֑ag~ލ 6:Hjho>о˻M*-s ƴl?-Ln+@Zt۷]juhSt0ե ɽ< b~6-H-UIUTܪԅBTa믋 +q/]G~!$AF2{",u YK3*DUr&%Q2d&ǎ6unk=쁑$ bBFX+B5[%r0ENu*3BYyJ7nf-U_ 3[#MJUϥW֋}U pR UbLc0跅JqȌ0ӏx:c ʒzt-+җRDQZ~MJ^̫֗'{kF+ƃ~/q@kTw^m;ENkѶm 05$vdS+Bx0Jh r9% !z :la_ݼjI 0G [rtқ:.=0cj)`~fҎٛAD~y[[^7%f:Nz/pG9&R{#;MfV hHHHUsRګooߌCxFC;)@>Q5J>ʥ޸`o;uU ^i <uUUjUR냺 Օq;Dx6ꪒMtRJu5+V04Igr~>_~snٿF5὞ii>]D>kaك :y*Sq0?4S7Ǐ1V:oCʍ"5A65(\ZѠL$IrQ2K?&ht*$ͳ!Ypd\>%wm97WɎEu̎BE&*oUAM0+hk_:tH4Ds+Á{<}[6`,Tx% ޾ *X ٳK-bO$BCSQ@#>*c?ڬ{Uq iF6 5mY 2b[-ԫpxާw4l36!S׻mVwou~l~> l¾{]ocSOuj_5ëQzt;>-r Q?ˇ̟ơC俆zv|'[(rKq*k0 b-=j9+w|]&9 h0KAW8Cs)݌(4}bn?v)<;qP b(JV%Iu~5fߺjycܙY(i5>g'=6[A $s"3R2%1eERyA D* t&tOA+;o';Co_26_}͂)wdAG֝yPr*|{oȿ1̃7MmvrWκj l}^YYZ0`hmrZ;4ȶicĤC,1 0I$Q}* ơ:K3rF޵mdBJ{"7n-ȒWz Iɲ,JLYTmq4sfxs̙U-!~qJ'qݥ'y}}r'h_ ` HB1kq`jg|M'-ki& Nsѹ1S1^cà`7b:[<F NU!PM"!?vH; Ԙ&ơ,&([EK*RBXYLz!#"b1h#Jn!FΎ'&ָv2KB}~;|x*L\\bv8:%؀VAT-1VYP)A;>&Pf@pgl$*$;A Utȴ]4-rKZ d<Ʀʀg'+_\>E4gnN3w6Gn?aʹys1ڥ^TQuQ`tY/7tRnIU4$l}) ,uP<<M@<^&{˗F2{xY WG^(^X(6^ue准w^mdJ$6lBôO %kۆK2]7z<ȋ뒖[ n։Nଶ8%^Ui(_T>3XjhRm>7PT{fbKnsؘENYխhΠMxSFQ?&84wz'NPiX'"@'=}-27]ʉq88i!6 eYgj'V&ZI FpmK"+‡U;%|nrEVkqWd ::}VhSHG^'s2Hbh=D')yu0isuΌ5fHRPlt`ԜVȤstNIQj-j-JQTÁ`&A!:1QY%,;!4g:FB 6HZ`pp^..+T4hiB$ Lc Yfvmoneڡ6 -ť "PLN\6e+=/}fH&)~]yUJ "2FGi|2oYn}J{4e9V:πfx$ zD`(;QXr;o 0ˉpJ"0FY.:Q8P&y$,ViǬQ0( ,$‚焣ȳy[Xn] 5W?:ox-}RSFp)nF@VԫHθhCS=@F[d A"]PW+Y^G*#R"%1dNF03,*͸J3Rb,B' Ll?q *%C^Qo>4a= ;Џ`m7$1V8ݓI!*D|کгQؿ4iEX:( S0L -y+Q{F)T%]A' :fHYi7aBQ  hWҫOO&}NQīŭq.iE 쥽 9,|VqR(U\[@0o]79du<+-9#Q]?`(ESHW.'agX#S[tv%X,LQ;;+o°<%*aUiC(XY.ZC.ƌ/wnwY]]& i"ci|f`Q.5)[R ]\Oonϗ?ʱ 1DNN`y&8Z&3`>F&?eTΪ)_'f11A >Z[v)HƣfI-wFoB #ꎩ#1~aH0a#q`*&+> ]܏9L7LNQ <|M6Urb"b sIs`'.u=+B6YR 
gFxܩT::Ǚ^; Cۇw?|D]?'Xip`ΚHP$<3x4_CSvZ9zw60cƔ-Ln)@~~n.ܸo;?ijG+w[@ثl )[b- EG} `6Ju$ta:xf:+ I"-2Lr'ुAtR@ؠ^ J*f,LʦN[o$wUZtZZ/1 RmTܝƙV/m\Iy}@LK5Pn@WҊwҊΕFe4ďwwhr[qiEDl h:aFhNPv*h5nLr+pai.+xOwYMHM_bޞN=o!CxRBGGB[GjK6Vp(@ˌ(*2yKGe$ # Gz0sdrvm(m鷓ztHjxWrz9ٗMכ lc7ܣ˴]JI@ ޔ}SUgHM޾>`K1~ 7vJc`Ͽ^LG/0. EsQe :2] F/j4-wɨ ǬsV" ڮf.=?ˀ;pN_I _fR#ǁ0Qz&uD9h.yDcw侍ZFZ|tcϫ!!lGW>prbec;q GE N, Su *D*4Wks-Σ(2TJeݮWi2bY.d-]mK9INElMt = p½ DV (Pʍ"6+udXDcZ)"VRHDcۗFm ݡ}ŵ{} [E/XvNkwaq9>!! &a̻ y# xT^sL+bnۣOc=sw^NHA \1mZf H R`E鸈.t5e&#QHYu2LwZ D佖豉hj4"]82lNT[m(2[7nqK3Lb.p'b֏)L 6[]oeי=&"w+ &]^? Ȁ]L9 ZкjW7w9*th~ōYv-shܺ{Qw^gR077hw(V~ꖎ 7U18:pʬ7ԈZ|֢Zwb[WPBO^\Y2ًoQ\Il{q!?_ŵ\2ns!b|a D ~N?6A#)'Z}ye8ayߊγ'/,7(-?n"k7mu)$6pًEy$vfPq\fOh!->{ViaY^BƬs/N}]NP*NSWbY`? ϣ kݗzG/WZ%:,R-,"T'r|9̩Q .e9drLCK2%adsH,/񕶞wr|^yJ2,uT B֐ 6 }VGjh(N{BRGkTJ8bZ!SSxQf&hC5"ALjM<(ÁDVHurL‘ZŦcֵB)Ekpv=NnϻO *{3\Yx8ˉRq 7H&)$2Q 01*Q'UZT:j Rpc Pv(RT"`6r(Tښ"uI{STe,%T[Mg;?>Z#^ٛ>NhS6"pۮ6g=òs(nYNL֢kÑ; mJؑ@ CcFSDQ3-Yr3ۑߪ)N9ܳSfvʍ1j<mчKmu >0uߒAyM>$/fo7huQ˾3#mͼ|n<P1@Co 5A)gRX\rT@ dZJMV=Wmhi\;,&wǁ>(!8'NŁ<]Jp,+|цO yMZPByD%-Pǜ S)ɩ5FKI=89G\#rBitI4Eg~4ae[2NKGxDRX Xc{` $%F*ٗ(JЈ6͕ :rڙ\)Pm]V*NQQ_:/hl- =!{&{7/3k uҳ{r߶絯4uC"$Gۻ^7鎲R\uu& R.8#:je"Q d eҤʻpsn1ݐ4h̋ppgCnU#Y5R& aHb: 3QpN9H[C+{jI\1 M(rpPK Q)C%cTZJb^ϊMgG>k͖5G3/Dn c#1p,Åu&LM#xмQzӳ@I+wG"jMPK ~QX2'&17,HV9Nv8$-r˘/rD^c@x0nufX/8 #+F{1S6 dH@Dp&*Nl:B&Ke&q$MN8z0 RBp} >XKe^KXjˢs<@1RHp;m<8qPN32IMyXiEtL Ű:pYF35Z5R M -4EA''<ɟ4m sIp5Z*?s XI\ϔL),$m@x8c"s=$&\~l "Nµzs^1 ]ߣ|DmX-4WxC1K PMs tâI0^ǞBT.Qh"pApAeEdABtHZ:nt![LFuD0 D;eUa&@b"avƵ O&Χ|DO523VM^>hM q_pzy|Q?q<& 0烟NqОgV `gn`h@C2}9b ΨFC LarfBIXܟٺWn|{VQ^q%-jI䍣PF=1]JAnqFS,<)(NyʏԆwn9f"M}BlQE'4=;!?^:-WKA=q#^Qym 5mxC:0{<)2\F4]\.];$Lo\G[m8NB:F±:GHW]ðaìOȴ!N|̻t2]nt}7j4߰8EGedEvڵV%"ܵq#y`]q0M,6MS7t*&~Yt6,hvG>?çw~Dwq;0Y4<6s ?W;oW_:gZKجPvؼo_1ISQBL{")AMcssï6LXH-I7_~lK@T,3j@G#uq}L1.5.#DK VrJZlhW+qm-Pͧ!WUG6}pT&.~T_^ 5'\ڪJNuK%'/IļvQ >rh V#L$%AI)JQ)!)/dQrbF:mR$v]9C"e`T<j[#i bc!`9_6^_JQuaoWW6׮j^GǑm=WUV [5V t6oL"CvƩf6]^? 
}uܺZ=nw[Vزjun7mo-eCk-7twkޅ槓]=WtlZW}bvV3sE/Qxm^>׿>/afD_zHR`ȋX#{1-F6-[q3t׳q*`UEV'2J@ޕ2Kn3R!c#džq|zyM IV[=MFšHY{zjj*c`pIea +- jeؤ+=NP7@ :8Zo[1ǀQO#0 ͔^կg(= fc%=yw4T| be*(nލ\9wU7}}e0:hȥsmGF[!m!ův j |NKjnJ;H#HeWۃT \GJi|oIXkDLB:cJ!,=Wz%05eK-myPX!m)w@r\; iY0U29DǓNsVP$QHh5鬑ڼ";b3Ֆtg^K/Y}l(~=oϩv/H2BjT::9:@#.x= |>.*O'9xx-l ^6x3'ᙨ,Es.K%w;%#%I9 SU@m@x|zU+.p~naglj߾:9PTeC,%%3I3%UbXRx (-""*Jp"{l9d=15OFK`m{t3Ϲ~˺_?rZk'WeZxINKH\Og0)E(6sKBk%k] ;y{o%]O{u|7n+MW?zbee)'ܻ:i__P~{B[r3< a6gJ@=,KtJL74 <_TMdJ@KKod0t͞U Wi畞#Jhi .CDX!*(QD@N9'?h%TIn<ŮH;GgPscNq: poÜ0|q~_jLmz;zZju"1 l)ΫqN;.RK5bWg K;m5̦1t ԟ,;;d,cD|*V j4c G('iC7ù_'(oFp* wKI-ӥTt5[/Nf AvRi ]X8\ \"g;hr~qOI4DNѤ?> k]CΞC)5jj#%ytSq㝦WňV3o/.'^.'$v>s7Hzm]풓 .'%|q084c5$#7 D5L,i# |̫pп-x1f74mGdӨM瘟b>8`#y`(G_"#zHa0?.>P4aA}HBqt7Ϗ/N7/^<̜ߗ߼@'\  kO&`7wCC[*Ǜ ͍Xgh[O|qm}\i7ʭOŻo/Uo\jUpZhͺP"ni#{bVԶ_WgeE]U81SČrT ڤ1}d,D#;9U1+,1My^m%%팤6d^YϿѿc9GL{№AGC(E,`@Q1MhJtO&瑼xyM\/&(ByOڍKTKFROԾ%g=*!*±YЦ[69f#5D50 GfFȑg\Fg}D DF9e%#O-֦s?X=v~š , 2^:} Muвi@Ǡu^0u22n&2:^EcKJB%fZ=J+Hfro;FO J 4 &S"gPO!Nu~^eBIVWmb T{s mc[Ya}rV"Q_zi3p\D6@=TeCە,XFHI(#<S< ҳ) <0τ6^pqnk["x8{AZ3t#Y )-I,:OeVԡBz=K)L.8K<3#q824uLj’`B`IAL2(!ggeE T4̳NSel,L;@UdPypsqқ %:^2G:lJ{C'(O|}٢.L.Gq-|qyR첮U${^t 5|U/WwOo"[k;0۬yzW{4m(<vHuƔM27;,RlPؤA~h߬m <\*׉CZFkdAKHTp<nL -ІL/jɴEA_0*>M;ie]\Oi iѾeD"usjTyc/5T(1)g9w d:tNDu<*t<*tN$HBN6pT%.(lJ(QJ*̈́r_ep啑a:kaŅp뢑h@LlYJHu&HR5TsM}̖F‚ ƃ R;A'U(ܹ3qnogС676+V+i$-ؠR1Kι@eD_J֌<UVO[ DRΈD hPhLMPo&cvֽBc[;N SFur(t +kUQX*#l>[2/!ژ6ictR^Kn,%VV??y$E[p)JbEKVUukTٷ+3u4jnb&cM9@-Gx_d ]RW\18QlRSJQ*yaJD&*)M"4F+7om6dO5lngھ&}xc?\(Ygkwew9m;CO hGj i#py, oCF82@}m%y# 4w+J{1%WkeJQor^ݟRi6_? 
m( h!^ _WVmű'ݜm\{|[љU劇:JyɆb I=u6ԃrl<ޖvu;!(fAE?kQJ(b@vc  :k9ִ˭9tζ[Y.j'l ZZ^G}\w+/6xDVaIP)p.ј ~ ڒR ľALE I'Gm*.Z%9a=w|!'36J((" B&Z-S5,1Z(!NHiI\p<7D5Z(Q;khɠ( KrÕs)cU .d }^KM[ylFys@^f4W?K8_Ep>g>I{>^* >N_ Т X/Bt VǼw)0СJ#@h)FΥ"?QD#ZD^ =r!JA%c@a4^ %0싪d MAf* LB wb3q<< >])4Z?LB<Ͳ婎4z˜v{;%Ի=][ճ"!/uт,c ų0-%M) O 6f\ʪ m9%` 9/|VɩѻbJ, k(OFU"9[߂YkҲsWi1Ţq{\1Dg,-Vn aiqdEALX(sF`CQHݹ8-P՜Y/>hryYΝٓȠױzU.f{}G&|^G _ϟWݫ:Xl܏"|KepjgeO%&=.̊/fN '_`7u d@3[* ݚūL^n y5-2-ѻl)v9߃e>cay>܈ bWlXQ(vj61iJzXvRO-au/ !-|\堧7lumO.Evo8^5TZ>*o4p|nqhu6,˫-Y%=Oٖ Zگ{.FcaƻbJ-uy;~9^~/֝PieB@'z-Z=˽{9uH E͆oIn B_\H߲QYVh2ΤҠ6=rWUwEҺ*JID+AUؾ*뼻Rqwݕf%?JmM*fů+[ m^$\pY\]._b:] M4#z8h&3B BNc?%Vn #TIZa⦫]wUJ;>Jsz䮪?z#i]wWUqW#<-0^_'_R9B8WAЩբzo;x Ftʡe@Y  ٨"1 z7x B$uqU/&Qki#py, oCF"O+%|$58 O2'vϣJ!u^zCk2BͿ]E5N΅|1,W~4V'@狋O$}v|ۣ_<'[ӫ1 ={(Joܾ1zc?(JiA*5Vݹ%J K_UO*UITqKBR*C&d@  Ư<"E\3 = 4%s'jE'iڷS[ Qt^[dP2;9%Ys#H\mv@mٲ;=ÖW#^'BSׁ'P_V zwۚL.C0w`N[Α''qop 3Rap—%H?Ho} )!e7e[ FӞeWi i+kVlq2Ib_@\3s;VRRu*tur7%%K<\lJ4>cleK+xAZN6*;MpJk!>*sEPW@(yA8DE`鏴rUZ36g<4[J=pŽ"Wm;rp3V~N2q֊[ģ7WV?/M#Hj@O11QVsxF-eĒ =bm`2,̋@kPz,xk+GE[=#Y40.]$ ybbm-yb_֥/(u>-橯n,VyμGd^0/)TvuTdJ],11hE1$"ECp5j`U*]Ȝ LE*1iX[I9ٜ<&t6/gGl ˋy޲cK~&@ӳ[I=&ff~fـfǜ^S/SO.s.<;{!r+i~j27Chw+O0AȐKm6FN9mݱO~{6=;v}m{o毃{ [ͼ6r;?L'-=x~rQw3-x.'11eer煛}Xb_ >Uj>?蟿Bth.6̶ͣvo~Q氜taZ\]Ћ0 `|R+h,Vf%=[۳5pllD0!r39!&id8k XoML3WS;PklUeUQFL!dcBt!*1(!M LFwv.ىc39?+_w(>mUFYoJ~;\6[vfUmX r ` *{J}lM1 T9 vUUBWzю1sXȺ(oYTfȀzĢDH1JI댕noRLΑRrQ* $>Ddtǘ1V|U*_\ } Bbl8T. 
+B]Xq,x ~#`Ps:2"D5nE!\aՓY@Eq5ga墌0DZ1 VOeV V *WBq :ealʥ-C`dmߖ`=Z,JzLC5`q#ØMXo]Ne.\/[!NP,9f`V].)M 4s>}6@جn}/Y.o n?~e[?跇Mz67T/O.& o^^#׿zyڨ^^ƦcR@\Ot':-#]/W&NR 6B*߽y1LG{k6eW8p;3Vqn熋8AT?vr.ήtP\͎QGآH+ RĚ g==->M"eUP-aTLF:(+Xq6Ev(z t@Oһi/rvC?L+ʝ=M׍|(:*|r+ 4iB;c{=+V=2;%p5[Jd UCL:JI2UJ JUDU9.PG MKbZ]LAuCa![2v#gDVada/x,eae[v$-nyteV^ŀ!Oq~6%s/-:Ye XĦ Al#(Vqj\bZш*2zEn Zs.Ejw㎧Jm,aڣCTUb.^;q^hN VX8"!㫞Tc=bTo:$5blm%JbN5Dm):Sc,t d\9jWZNimؾ6"mDK"M#@ѱ؛ycs嬌ʭAyc4^W[PhMV&Λ%zsޞR#b &b VRt*5%@& W6"*Ժ90p]Uc*t.{ t#g%%2q:9VOyyC‹*g狹r#tckU37z;5l Jpl%kBcZVt RT[[(Zѻgasrt[Aˁ/LҲ^תφZ;* L&ch?.glY)ZS˖);}M5e݈aS܋,gt1Ƭٸr8pyܝ*xW6`0+m(qnUK &KxӧFLRścߍ7zf˪N)cRۿ~@ iX~nZXJCA=yHN!sbYi/tnq'[stI/NRei<œLOڅ %]+Q\Q9(ƃJ[QJej5:ǾUX0i.B`k9!&1ѰKZ @rVTӥin`Sd\J'dkqi/gU(LnAtBwf^,zZxәe[Pn/)ś9G J#b@Hٕ*,J!_ =_y hH޼VՔ @URgbI5V?S\~'݀7rDfc2_J5S#ͪвcl 0(8ڼOܼ3rg-<5Z'g 8 AW9Z[uS0[LVudѣqƒ‚~D[v=eȋ |~}xD6l!UYyѩʗ*5UvFơ\aGGxcÆLJAG/y*:tgt1z=kpe0rh`{ܐhi'1@&ĵBo>w8we2s#vY9m" Ory*Kq&L<OgO?hȍyBzYjWCUj)_ .ewLp6Էȹ>/Ya!B"\ɽz;d؅ΔXeê"ܻ1N:lXJOKO8/N_yu{ՋZ=Y4\]O/f ^pǟr8r=T{'2J9uÉpѬ7"'K+_W{n*"B1jGX'TZkr`z>r$7]N}Tz|yA|c9k%p qj BT(gqM(hC΀o0!`^uf(.9 PkƋkRLRRzzg>?1M>nx-[_}T|[[H<ڜaC ,AR)]DT,xR;0'&<|PίgjbTmbb`&k/ ֑[ хĎqP¸#d(چrYsD&VO-MV05''F OxdEUk}8$Nc bGSƈ^(T!MВ@zV,ƴS1G(.c-(Vje#\HeH:k8Y (o"[57k\쟵E5.syo|u??7>DKyě8?Oo>5;R\ENcޟc yd~o.փA`i?i]o[GWcI}TwWȇIH0/}SFo! 
EO@bI|GUcpK,i1b GRc\2qrNΌK7s?r2"x~FmPcy%rX.fr9$7~ywm]Ai|U\&.8p4CKµB}]okקW5*d$%1I-rݡEeG:'.]è261^/7"gkew?x S%18ggs+ܮE[N(D} =F0+1mLI+h,S|2}YLj7C*^\뺱 NK}ZHX2?N`El>̢oШ=xxi;ѻomo߽7~;:qB3p 'H@؛_#Ax][]uM-t]z>[+xC^]iaX-I~7h/I"q_!gB ̛hm,p*V\Lrd=>h2ض&]uQ?&.T% +IhI!RP4)Jg:F/9 rL6ztqV%驭 (Dy& ,i0xi@ KŴfVϨgR%g:\*\=Lv9z[.NnywK4Z̠vmk.,~ꢂ6W)k:BktCDs!#2) ڏTA{^A[.(vl?hTqaD^Ig*r5&eWѶQ#GE#Tx>rR3BЙ{!UtG@T!QUe,vF6[#mG B+7[b)CB1 liapxRXO&iA Sä́q?Ff'Af,&tAfq Dk>Ңz4m}j ah`qƓ0LOm]֩3F+~4Wz{w\pHDл\#؆ Jpt9oË k]󫻹z2C*M}UrjrIr+o,7 ZerYͬRՍu171*uewe׉"h&рdFAS pjGjjrDcx*9MP.krʕįU{9ZcW\.E\jrqU(4GXv/˦a??**^8@ jb8 lB+~5XTI00"Őuq4P4KN9 S:okk-JEt2!dhtR Mĝ-TEO^G^pgZtF3y`aI2F'Ϭ!Ƙ kA Z՚&$a#QE @<,Ng"FNqm`O 0Aւ#TCH|'/6w||&JIV"&MJYEt+, 7^F 2/S>s,;i$f6r":Ն LX&u"+S"ET2ˎ3r.طTr½W=pȒhd,Drc8XGQG'(i`+ t -?Shi'lmHʱE2,LT϶BVU=I5yҷ.!_H_A~i E^%lOwiUʚԓۀ5!DBFdSZiߎ9q/cQ~WBn᜸SoK-͗Ta}IU { ;Jfb4N*#'F0(J:VIaR&pY1Ym5yzԩnk6rĀF\E >}`H9m`)j;m;#av.{dQf7pyxn)+74$|(B` 0-i@X`zzϜbjz@F37m]X0}O/'~XǞkuձo^HbQ-co0M fbXy6l3K_ A˃(qi9@yv {9'9 |Dlb,}2 ୍ BFspZhއ7\,Kj>$x'1@iPiޢ77dC{#Ǽ͞vg_},]v=ȭQo"ܚ/^6[.\cyJ|4~ޅ~t(*Vu`Q)t7xJAq\ uB VH*TNQ5[Q>tJ I()4G;ar\R ΓuXRK}Ns6:,0#:X3VPRĔFV"r) ޱ9+ۋ0|i^r+,ISKV [ZedNE\o4Fz>^%C9d5dwBqʮ /S`'iv"+B1'-7GI:QU,3hUƨ2>z;/*C"5BIK tY@-0"C&;Heq$ ~J*wZ<9Kde)K,D׮ I"MTsچ(B`y0;J癋uD_vlC:p {&2A L}p[+U' j]z%(a ?RG7'34ѐzW'£qZy(k_Фr#auw8mdiҐO6]=sUKy!pܪ#r" )lI .m^٤UuimcJ$SVV22^" *=4g7bLWn("9-Vs!yu<-jlS9:2ym`T2%%"D`U &łxHR`Mw~^۰ٺ9   zڈ.$Q2PBii+Sd$Lv!5@ʳ{)CTNւF\/YeT**{YR,y_k cww>Ds{s!fPjO:l)1u=.  J-m$ SJҳæLFmw 'M R:<+!!}LڳQ(Z'6gY?^teS .־6hrqY)ThY~/5=|~>\8ȲxALP$;sEV`RCVHӻ859'U~P ϝٳؠobEjRԿK^nWhAy]tB}F7a{ld1n{|frϕ\Fz){*!qN`lnzi7GK|T?SlȀwyX[mhPMGl<rDn7Ӥcvy 2X`a0Dփn3`'6MN߰MLuy1mP}W S].^R[曰[@XB6-0|K6~ ,Hnm?q}Z+is[=}scy{Dsw4zyx㚫cw ؚWjz4x13kVd&4VE w<%^;mҒB`':}-eQ]ow< (Ƣ&ÎI7`SR! 
^;\R*aT_ךlRAHT6̿)5bԈ G]_X7ӍKMՖ6z|$R9ST3ڣ./}=ũbWq/}ȾRZ=}K+XIu1pUURJpU~{>zNp"Qe➀ nIkyeR|\} sApUWU\җW,- ;\U)\Wo!^\)=.bJ{`eR •dO?\FϫM*wͦW.m-Lfwx3y:Pp%{ }g]_a64l?c::*V}Z!p?>}};u 0]NJ@fi4Kio1L_Қ lO .hWUJWKZw1pUŋaWUZ2}*՛+-GgowW_\97v ޣB6Xd]L"vSNz AYc()rv~?M 1 eragY'f7RCt\7HK>qpZLaH:l1Z>RBPs}`աr柈i)x9Bje j%?r5i.d˼X_~9`B_#7D8MFZ|T.sepZmPy 4tiW"l%ȃ'ūJк\HT\lzX s9R$(t9R$8vyY"RrX.Iޙ`9IRٳ I<]4~~w{{.1;4Y< OW;j"LyhkSPs7s/oלFX;;&<rhsn/mz]eW^]jc[\yw+y/4?J']ם]=~YBs"h&З}f0mô A/< ؼk0_=ZCBҍ0fE!Cė;c,XQiѕ81:"u tHy??[7L=lwRid19 i9Ǎ\B;,F!. e!#(-t ġĔ^\siSAZRjk8OoՏM*&I4uG!_nJW>;<]]~'ys٧,dӣ=?o7TߚE'zS]Ǜx8T{;QXuFj:ƍNy^t "B'pG#.O_ XE-F{s |, QK500Am)EڥynQHTH(ePm!/b Zp^,*hٚlB&ڶp?A;8{4zH=sGUt۵j8]\,lΖ{N{LNy}n⭴WnwncY,LuP 2V&sP(2 3AnmJkv>'Q u@&*ebQ! QhQҺM3qvo0Bw+3xD-[.nzMIo=XoO2f1cgF#>/xn@Z)Q^v"ۑ2s|NPs1ə&gF΀Cespg909,' c]O&Δ426I 293_r5eߔ/FQ\ Ey|Fɘ()g"t2kT"|N|8b}onG77j"JrNӔA$ r~ŖwK/Zoi(p8MTpIxI/6d֕"}F6/)mj+† {GD,ISE<wM(2! ,ɉm8{NAzd)ʽqKA(cgLzU KQy$&Gq*QVS(]"l/i(- tC]dM-41Z8*DIG)1{9Dݢ9 RTN$ѳ4~Iy]M|RChೈIDd%J(ce}akozO%Ml.nY ϴh :Zn?(/iB;2D0>M!2GRD`2*DP3 ?Usź6t }>[@(GmbbZekҤw2\iNHks!Mvs 6 V>s:_cg)Ĭw+%p>mɄ{xꝉ9!?(ZjO Uwq|Wޙ50L9k ٫'ž ɲ _LN odc^ܜa7Xhb-?SO^N#_b#U+rMT:󅬟 Cq1(P& 5$L^ EN gl2fl7D m0) gZBسh X~Z=36ۮ׃˿P k wl 2D4u1 2)+i p:rQ Dk]hTMe{K'ڸ SqA*hJȸ$Ď{}L Z5 HuWՆ"^Nc*q,|<"5,F zU-ΘqJr2Z%r31JD6 B &-rJ^eXm8->sw!*h/X%͖Z͟pJQAlFXMS-Tn?rq@ ڡh>e`hcO^F&-AI00#{{~@of*e]xrf;/FEiz>:LIч˗/zݺD:2 }/--:qɼRN_, %PR*YJYr|%Q&sJ$2q(x{}4_ rzqj 11U3oE{`CY WQƷ6FYeMoތRoUY)c)Ȥ@8RhR%)EUAAЛWPR볼osad$+8\TZlMQn}xٷn0oz9w4)fS)=f4t@XM)^Y)6fd|:]4 һznmzK }԰#YְqE~]kjVnLVxt2P`Uw[DyM`¡ 3')j^g: ;vQt\@*X3Xj;zV3dYV!2gז&C#:11JiQ+X4B`UN&BJt>EFsYqU6J X,-X&sm`Tm8ہ}+4"G#G=AV?;}}Zԯ y%E@>>Ϻ`F^\yyX'*PXJPo_cR;G MA~" ,2G,)5rI=0^!Xf=_l ^$X_54Cwwb篯NY)v hNfpa_o["<1u!\B&;l{rؐʤjpp\g$9}ʓQwGr򜰐HOJ]pk8 U8]p{` /Nyh@9"0hr?1 3K򾱸->3"6;G9\g%Nv sƗdTQI1:FFĨX%FUuT&S O"ytd@0d%FqΚi%$ƘW,gKEUZx xmh\HS0:Ɯ a-+9Y5jCj١>5!¥5y[;Jei tJގwbjID2B]!{XƘ#3t59pm9$]8 J]&aMj 6[ʫ NrnS'EHFV 84C7 Eiy }p|uvqQǝd$v8 
;.W΁yk'>[UM(ir癖'eG~K뫓ׁُ)`oY>~9Y̭tsH4v]G=M?*z{RΆXfY_Iicfb韛9=^mx8ˇ{lsuYnzi#cMPn}=O<)f"Cf6ԐLf$L/NH@_/\D4W)S8g{~/s׎Z]S%]O#8}cz2; 7.W )?MvOC pU O/ћn/&项NِN9k:T4' O~0ft`D`顣R €QKo9ˆ2)_3FTbHEpЄx9ьPv>L(a$#4:o̕blKosBdtj6 ׁn.`,Ir6yĀteP1$ 9Rfx˫LϠiY$ Րȸ2 i̍KIX..Ѩdz֢Y*uNYt:&hprs9:a#2X<1%]'jO|v!3VWQPzZ p(`h۠j3)n 5,AVK5xQeoAyIsx/Z@&:kɕbQ9#2%F agZxȑ_l| 82 &XఁG"K^ɞq~CTN:@XMEvU! O"u:xc%oݑ~^KIE-C[\]jvgisCyr<ռ>MZD[(ZkwNƳ}ls.`s!h$ 2Zޫ;:6iͻlFj@Ky.\#Ǘŗ3澪}z#DkVF p$ҁD"hdFz *Y5׬dܫpA n)nt$F0͸p`-^0Yc)``  3 Tcu^ON,(_ cB?+˪guKQ*]w{x\bx.qĸt:m<~uRdPP/~8B"$} /_/M^>yg.3-Ő>&ӫ' mÏ?h]NqZ{K2qTk*|}Rf$(:ID=o u{mϋ^m}P[$)Hgk+ *wflÓmȫe2ԉV6,^~~U 4_.<<,_}L߹gsRq9$vǕh@8\NWlRyL4T攚9>6ԔP1lqij8T[ls!1%v  dJ*%Jx` Aҙu*aQ& 4A"Y҄PBF%c^n$FC Rc2^eDc a*yɊ7Vrjf%HHjXnp U:DQt;D]Yϲ%Iq##EDV @k&F5Rт1Āw>FΞ{1oa^Q +BgP)$$ϕYL 8hJ"-Ps BP!>GMsV~0*;Q{: n9B`eRrN<jՃZbIڼHv+Τ#Ψ2#}MDG}6A" *^pTDD'¢GAx/M#N H`)\2s"a1Zh$En"$ng!.7zѳlj,ͷ~Tc-} R:"ICf^4iTNY2P@  c=V`Z+-A/g`~t5i&g/'`\ $!''㢺[x毕PlMY6*Є`3/`$zzeݽⳳ`?M!2+]o$DGz(s&p1 0 i,V!ĺ܄(5@K:"%,-\T *&2QhҜxN%U"g+>} F2q)@i5FO =)4 P$2A)c"Mt COсs$x>hrV=KwY5(2GeAXmWс%ec,m4R`> >w? E{cVw3+$HM!u* +IqPIIːm b[<Z,Ktd* R%R7Rݍ'k$0m򁤝a%0d*s&-*,@olmB뽭nϊ*X5 }|9+^ 57rҐ6qL?^}kvѤBbv]e9t\y98ћ=/A)#J>Q[$Sb)uābLSkĒfxFWG gG8jʳO0 "#c< Ffr9WzEnuƓ6y|zR8|}Z?ܷBmѾWR㋳40$Qܫ ΁YMms8=QenBsZ̛#ښ~?y>x}yau`vR֭9y4<0_[vH4O?^!>>}CZ_)6DcmIG:48ef06mOe)> =sx6297*Q4ꦹZ ZF|\2&dKח3L򠈍c~y۩-;qxO$N˿^_ߞ pߜ{N? \`Ch M/#=;C͏{54Zgh9ł15aܻ3UfquH/=N gdZ^7?$xXW4H˯ٴA .hmvEUh/2&}{5V&Jo\IG yĐ1I& B9%Q\f9!DB^^Iֆ`R"tאFYahсCÜhg2%3|`/u\tO|%yJ&]U/i!W_jf3ӫDߜ;MUp]!TiSњ.dq~g3M8=ח鳝ȞsmEU.eeJف N@0H@>hmF'β!D:\놭zT,K &ƑKmra2B]H"dd OIV.w)m}ZD\-dx Jt._:iL~:> l7L Zo>^5˗tLK*XCڀ9FiTOe R-oh՞ 5ʍRL77($?RRk7*=dG=/Yq.J4<)H^fELE453D݊u8OL֯[Cf:4[ʕG+<.SH/!k,WR;AҊñGGO) N$$N=9Q#ۃ$d]f24;C)9E*=2zV{ikt?۠vڠ]oG33I=zŹLVGZi< Qq7A5[աl#VI6.[-`~G@5 ];/1+^sfˋ,gYiE0duҺ%#KB`D-g` $xee9e^h:Z`[/95% e`^'\`V5W2hi{q2Ҙ>draw!HvC3~5;%1GO2 EV[RI:մXf:*kdeݝz;/hf""cIKs N&l>;g+$ T>x_59g) v b! 
PѨ0@*Tks={l={Mʚ|ZOԛha`ag.CGܡQiלGΨH &ը\tpMZDGW2|}p)=@a_J-h`1LI)8JH{:gU chXS(\'P.}1›3l>n|pcGw=nH}ŦmYd)]85v+JXi$jtzTXݻ.göz.y~z'zd4ґ}ZP ?-T`֍?}Gjgq1uDtShS9{Hՙ50njjX5x"{O&+Yl,vnLN܏_X,߭%O^gxPoh9e"xgP\ %΂ru/=*1KQ=6 US^F I& g_:cgﱕqZ38`4ƅh,J퍵( =9fb RSNP0HFf(K9 Ҍ}cXx0~^O;Ⲙ_<=3bK+vd..UVY2FLME6dv\` g" C< A*YaSan$]Ht]=s]Sl_َrVsAfڱ/jƨmcO W$,9u>Pl>qOArLOrRn[hC>Cff^t6ԉiYH2Fv+fة}c`,U[vc~;aMΙ Lđoz+/ߍ׋G&݁z+ k@0w{#2wb<Q1WORm~RvkV|]Nr˓u^lÇ[9}Y4ڠAĈj\^?>|/# 떠 Jsok?hmyzQky?ꗟ}fY4fY޵ҪUz#&ؽdHC 5H#]VzDX|$ yrU`O*zyr,-s\K$eEf=+W Ҫ' O7gR㏳dr|eU^@է﯌Uoxu6t 毨1M͈rGR 2L+Y,H("d2!ؼij+„4D=I! E zҵ[/Dj]8;vH-%՟2Ieф.E]bT &B71T~9Fc&fQ;(@ *Z^ݢ9 +-9gj@%v5W&Vs1KM)E}J;IDEg%vL1``24ڌi`~7Cz]pr}&U[!< Z <A=T<8*KQONSY&|rn@ڣCM;~=1>4|q~q2_ݖv|M<>tW}zXaЄI;1 TB .aʵ]6sVYR%=Ԧ>Z@uM o#yj8Nsy4@}~'k~.yMHyhtyǿp4edC*װ̍[]o\ezVyM;Wy脿_] bݽgݪ g\`usWꝆ0[7{CYb4!:-ԁѦH3Plik}[˞|;ګuќwmh:zp}7@ Ŏq]Gm,H[1wltV` ) fE3u%xPE[NzX(_?왭b*W<u*EBq1H: ս&J^ BA g6 US^F I&wD5 ;g09Ϣ'"b]`+jX/B$]w)ȄIHV)dS*Hx0BSd4JJ%QTHmRqA[ zɀQLe)V/Rc,4 o%vF߬iqsܾS?2_$䒥J9&aц| VXm(#d V"SxLx8wk." 炈c_Dt 8  bIA\Y5_#붫)4oO{q! !YW JT )DM mIK$ 0,ƈL툸a]F ڋl1`ɀ%gm\`%:#֘L/6#Oz'QTȹ` 8km,2? UZfl'FQR̐6ң/X3Ǎ\xnd(H8?he@"Z ?~߮eIӞ}aWWn3uZf|/Z70=w=oyۃo{0&mYYor$}^W_~zn3ݫo û ֊^cZ:>` yL#>\zwI!R!yu5uL\K:}rGFXu?1yg`R1[#h%1g8بAU=gV@SG *F5 }Q0JS@:#=``EO3qv< ? >vb圼|i雫ŧ_e A׍н~xf%~Q$+J*y 4Ȕ-m] m=8%q΃ =zWBPldDRhg*Qαo&v ^˳;^^^M%}o־^t୉p}Effb(lbsG]Z rXEK*Jiv948~Ӥ[y;õ{W:P}i,()K { {R8~ZB=|+ @(˄2R%g%Y ±#Ɛ!w0|Q:enc|O\6LseEZ5|/Vd@s.,#AeVB d2ADK)"g$\RAt#xfwi=g<_A8!rB{PQZ HEzKQ očCVˠ=\k1HN$O>N ݟ)L':A%dzz.N2w "e:pd -:g\pƓ4oeaθ{5ކ-vp, 5CfevųWjM(ɴt@deK8, 0 *kwّso@rHt-#D0iƅk/ǨS&_3 B1Us\\]Rg *w {/;R\?kk;E jywH J+;9k7$&)zw((OpM; 9T@L<2d-'xeTd9KЂ9J0Ƽb9 m ,{kڔPJt!8ƜVrAyc%'cJ:[nZdh,F6_ƙVK/ NJtYwF*m2b[!SVH S$k}y@E)glCI*d$Id7(l0.) 
d83 );ώYkId%rZ;E[U:DQt;Dhe/Ќ!#EDV @k&F5(-H x';Ζ'|_|[ }^CʬYT "& s|v%B*RJ5 ,% |Z{'{ӜEԞaeXt9"nZA v`'G|GROz!<_!*3Dd*qdg$!IWPI ѫh==k{ލ8r%b 3h܁DXv<i7IѦ[myDhmWXXz0_g$r m#u|x{ #!ɘ4d&ELmʉ9S ~Ap``c ƪ8 wtE!r; t R]Q EHU@lD&q%;6Y_P-X7{=;;ny-FWt>Q¥ &Az P$2A)c"MVtܮki Z8wn͊]&Ea kQYVE-pUtt`JFVzǚpju~eyzk$I$!A8oJ1S1L[/EȜH]I~=jRiFRh\),&-CR@D 60!XHd)) Aa) -##x`M>34LeΤEcb:荃MpOhO5||_B ?NF_k?K*g'cxq+b~?a5I)t\97w~MQ)dHH0&]]AQORS# b6&Q|>RIq C8dh<%GX2h x~vLi[+]>2#md_,sM״oxZ_ i]Rsvii>ީx={ԯfZ //7?4Gq^4jOi]⿯HO#70/< }CVVYn'xL|~]2G<*bly_~:-,Oͣ8xGG?o;O>7\zuwqB;HJm |2^ԎVSKPL ]>u{+x2􍎩q\ܚ|;<4׫ģhMpq=M -|YLq<fzU?]SܮWS-B@-iK&ێ0('P_nnϲH q C: C$KA䔐DqC zK;V"MVa 04 WVGC/a Ajf s" L6ݚ6 V8sOd́1$$pQ"'$d]f2T;C)媔9E2zN{ i#22NvL.}SH뉐lH+MpriR{:TQ1 Ja3UUJ?@G U= U= U dlJ89'eQ:E]2S?Ԛ )yL,k&l* f-WLkw~8- ݧYu#8ڶU9N.w>#^W"0cO1&C 5/T|rל o+P+0F!9TGR^f!a^ER&^y dh g1*ۨMh$s&e,ASP\}+\d˜;Tb^v=S,e,:v%t\QuQvxY0Y Ug[{ ^j:v=_yU3L=f1tzvlA.r2\mG /wE/evN;]]xpnRϊzV ?dλo2yvzͶj~q4&p'ly|!#wzK5iulU %c.s>oC17-6"\Pq䧭[>kył&]SЮr€2dU@l󨖭QCjhR6 o\R :,Ϲlye,2+&:NZddIF(E1 ޱrv۫8^ʷ^qk(ISK "8zN\2ȭ HGKŹ,Kc̒U SMpJ,sD$-0^dmKQx%8HJ"2QY#ΐ.Ծ Ի9~A3B%/18s [dkm#G_l|?\fv3`e"mml+ٞx_[zئe9yıjVY"끌(!d/@_!6|^vS-59g)1Rr=KBbd0TdBIGP uul={]6ʚ|ZO3|@,wG/{:>N6&r74NR9*CbjxԼIdI5*2߀dD%&{|S1xw'֫H!.R ڻH JT*x$xTHsZeNyƍ;+3wu"θo ny kҍ^]Fy?ο|?zl@nɻ qq|R ʔA%6(ddHei~#%hzl*fdP+XV7ӟb֋RžR ;$]7LAD+)n@#X\鯖8U4NoXk %:rja㒴>#,H=I9 )!kcC" !WdEǢsDFDa}@fe$ɆP[Nh+Xm~<}eD="D!&& KYygS /2.M;ojBV\ʡ8--B8c*Vl,hg!:V&#YBY,Wx.B^Kx뼰(.uV%@T~wb^Wq2^Ƚ2)ԳErx)~\L7TO84רdRp&l[Ocח|ơFI&m2LƨMR>:10(,MuVUY3X )'sV mUIآJAk&z&E8V6G],ɕ7h{o_Z޲[YowlTMŗ?P|4@P]BHEYPF 1f[Eyg) ;{6~h&k"z0V,Fٲ{mo߼}cUYz2,FROtQ|M7]ݛ_蓏a

}CloIaEc{fژ{9p־"0x3mE܀ox⹷oRzڎmm p֐w[?>4TktH9Lz`qpO%)Y7>ѳyiG9YnUNe՜+F y Ek)>K|`m%9sltW؍C/5xd<^:Bda_.+1 DLhkXɛ8 H9 ;&xd{]qf8ʵ9~h0yt#f܏N1@+s2~s]Ӟ幔\eV&7 LT޺kA/rAR2KJ7ʃ,@#5S) *y!̙\ 렌ֈ,tRSIB$ryBX҂j bphx#y2!_.TG<^BMXOTŝn h̙l)0 [ϜʐL% W6G'9jQ~66s[x8MYɿ!ac24IJ G癶ܢ5%Fg7' ; rbRg䖛BYMJ( ϳl9yw0)*7 rPKvDH-2fΓ_101B:o\F}4,aP)|H*~ 49~/N[?HB ~4V6fkCRh9f2XjNjNrf8 8wɐ 3`d ʋDqa 18!Ve.nlLXycF! L&-Df]%4s*"t͑ A϶<":U%Ҍ:zږy[Fjb-l3=H"Di2h:b`51瞆,M?=K1Vշ̎FKgkxvctN G%d(%d(DkuQJ'l(E{v< c:y(}JGhQVyBH"T9In-79g&sL3^s9C,K6:9;Y-Ң[bC0敤1%p$ OX&_l/bCfbZ%96\cY(5TBi"$%d|%=Ngif"nOOm ~&rauJ8$ӳŬ[Fo< & ÿٶM6i`%8'Ѣ!Һmr(/)!l~"8^/DbxttNQ@ NBg,s@$K7:ǥ+'ݟ Ś/cuzB__?K?K*{z'oMOֺh^L',*8-ٖNJ5_iՍ#y8 2kraa0vtgy@T88^MFD3-Ip !CW0<ӊV,n:MpzCx9Lo 9Q]" \轍—KAi|S3>/'.8pjy6r5Oyy5 \^~T1+ `傪a)rQMC)KneӮӿoiJGr/J8σm?==^\kc웟/~:ê0B%Kh/Fg_gVw'_n >[܋4Du 5]FZ[`D6'z-U|2x:G l}6׺hZ׵UK#qYL9R*4|2iL dv_~_6 U4&~6ϾW":O?,ן͗˧OnREa!c?vUl]eTTߢjapM-kk~Xy(,z y?O,wb(-&Yģ" ] .~`4UMa^D/QYdz*\ƚB \6 KYꍗL"z&vXk! X'͢IQ0/U͌$͐cI>k'mo#aAy-s!>C}C蜲k&Dnضm 휗۔T \nҞGכq˺P<ø>cdzR6|Xkc?0\"7c^kOәH-Ņrϥm-0DQ}eUV 2ƨrT}%-uR[$UR`Dʅf.KEHs "XeITcjTo"~bCI˷狲WӪZg0}LYgҰG/1$1W>U}Ӌ&e09*欵8]4rcJlK"y)[!/3'C1d5!e9I+Q+Ru*fOn\JJfnxlJ#>e 1ZȑJJֆecp^q{ouqߖ6_TKtkk.|.w98lMoeU>ZYD|}*}ѓ̵ɀEmAn6-.:lrEm.znCA\4銞_fOo.^xLgNmғ=YGnzߕ|7(yz2PmfOy ٯ>Sptrv~J׹Q7/G i_sM>.y9nPLU˭d9ͫ]>GlLxj&? 
%%J"7`ۨ% W'^XY- p'z ~q}Kr;97܍nnǵ 1tXEA 2^}-h]2 {810裱`0Dz 9_F&v7*tLyûxp\ ԁqo5 _If\Ě}lTkn蹲T^rsu]?xᐂɼ?Zm.?sF7u@_,|ѩxhV˱Z_z etRKIr|5I8zJ_DMZJhRLΘ dqW(є Zj-rE/h[ּ_(J.<ȍP8 A' 2)FI$ADQI"!L tj5-oTw9՞Ia.:::RR:ÉH,TD˛H/HiB Ub8Xr gjze}/ӓi[u:5߮AmvKꑌ Kjuց1%N!Zag\ky│Vȶ'N)(WһS'q̀]u \mBWm}'BiYZԝJKW["XW誠eWW%=]5 EW;F#LW;KW0tJvѕ݁lOW/z*OWWUAkd骠#+B0e BW[h;]JT#+mE pu`W `ex5MHy0<`ްjy:\2/[LR/Пo~zr3{4h;GASZSon+4Mh[OӄRBo#MK֧/M`+UghZ23>;BT`BW6{:RLERUlc3>;Bꪠ#+ͬB]`}{w+tUl;]J1ҕm[`+&CW2B)r%]`Ż*p3V^]7 ] خ [!j7.; sn]t5{t= vt 7}{UA{4 z:BB):DWX3*p#ZNW^]#] a+Ul3tEpŁJ,([vV+dlo9~N>dh-] zne%a܎,[:n0/2ګ]ro=rnrJ*C̃7Y9A(:0j:8g :zYɾmβ;dQ++QWWJɤ28ȹr-`3WvZ(,fGw-kAk`!lg+tU{B Jz:B*vi3tJJΨ@I.wD}X1ҕ&Rt,mwbb+tUJh;]}QUIlGUXW誠5tUPX=]]YI`|WN@u ZzW{A)dOWؖ]φăֹ`v+tZu cp7++]^LIfms;CWe]+B+i;]-KGWk]`!uUr*hQ J1ҕ`:DWe+0*h J=]!]Շx;b&O9#Cń@4kOnDS;CSBSzjiiZjѦѕ Pt \*h5 JHWJ6$ M>v{c|r+ZsYjmrKZJL!N-`@Xzn(\^ ])ZPE‘ʯƍNY#&l/X8])ZU|!U4z"R1nZͲ}\o=.:>DJd< *pjw5theJQƣq=sݙ!}:]c\g38iOޙ(AWHW/z@bVDWl Bv_C'Pt] ?xVDW jJᆸR.e:!ҕCwiEt%nLk+%NW‘0rbs`c 'hu!ъhY:M+\kiEiZQQU"M{S ++oRbBW>b (HWHW&DWWCW ׯF])}m!>q J8+˫YhY:])J1G:@ѕ7Sun h#-]#]]%'!ˊ*ܰuh PDW̡3R_o6 p}78QxOcgfay]^:-y9nk+eJK+Ei㑮t3GWDW XjJF^ ]m4nt(HhVv7w5#Lt ̇HW;͊Jzn(hBW@.%#] ]ѯiѱNQW}D`+Z3 צЕ]EH"]%݋,zԕµ1-^])J>N8t9Xf=x0myRPŽ3C_ V|x߀Qל]?../`;#ihm}s?ׯB;/?_F^=EJ#kw{ EOjtA?uf7K6Ecw7hu~캷7V}w@]]^kp9;2Uc87)){gDΞ|}[ϏH%؝i )WLn?z%lm B/sũ&o|_$<\a$ME|yBRsomΌh9E26r%Z=)0JHɢ- 7Qrr90)9 bi k+fL()߅Ĺ3RBN"=Xz B ɌaB4Cc_CB߸R@1qjzLM&A hB0sb;}6djԙk FІDP0NOV6DR)C14 b,cU6FQӐ[Q}HQ"oDЈO& Im{}k&(1W;t@NQ^db)j_%ZHe=|ޜ@U28ؖkCtQ(%5ShR4#HT5ToT#wɇ5ݜBODL;α'_squ~"]ƀjk$K& VCIQ[ %#BR8{h$dCiВi< DNbuȦITчl0ɺhᤥJc)E&6 @vڳdhXh,Q]Z0]~˂zYW(VMCb0.g.fTYgCB Š dcA{BAi!DkܠGЋqiI:c6889h>uU9u1U҉Fgc {L^~ק:y_C|PÅV|nԍ m3bk!0!{Ayrl:+K\ІU&1:ĀbPd/)XX肸az+R|"Q*L&jZcT^[e8˽#`ҥi,tE͐sUx ;P t1puS% 9աQ5ڞ;o:[ l;dzeA4D4X O?X𛼋%Jq VlA+tme,#z,%I.O@^P)wK$80v7D42G"(:AzxWijw ,hT31be@ВaDtR:7:d,E$ fZf2Di 5eH?< B#;M#m#`uB6àmDMqPkRF>+h?tt=iFv@ pf5am@z뭙+ .Bܘ iovC6Hw}>>EGZ%C浴ϡLJ%g[H&#G$o(oV1 /9Caڢ qԌCOW= :H][s uM)68&ӣ+МN|Ns[ O+5vZ R'Fl{&](LÁ8 BVb  tJu~sPqac:|ƛoރ >wqU\Y{8mP 
zxkaÀz122[/d6O 4%尀h%#p{Zk.!,uJi]0lAb i4 V|>4fs˥:pU.I!"˱XfԤ0WqI/$Z?\C/ҮRWt7WW?kӋ^~:{ 6?nnNonoߜzEV yh~ow~b|Sgo6K77Eh>|y6ndkK 5޵q$B, dc` b l0EjEʱw~UÇ([EÒɜv=]uUw}3;|j sPϹ >R=T0g@Xz@Wj@_ 9:F%e@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJU|Y[_zQpY@RHRBcI DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@:^%DפVԣ%J S:F%ZxR@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)W 9k_[sH%(;w3p )H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@ǣ8*U/~b)K/O@S)y̻tq3U%\jK%D< AA s0B!:F f++X-thM QZ#]) _v 6/Wpfe ͎Ob.N_buHb3o wss@'( 1}S2e|fKkuMEU׻ZX:.҈RK#Kku\BWVT(,gB__9]!Zf&: ++ P ]\v]+| c+i_B+kYB4+_]!`WO­gt3D)՟ Vv:c CW]]ݷVR [V ]!\k+Dm P*戮 +Q ]!\j+Dk| Q:Ftut% cFVDW3W ]!\QM0hcB[+Еbl1].Ɠ.{u>iw iS0x9CQK5~c . z:/i~>NKz@ˢyHj_1g 49O雳Z`b9[*ƧU8{ Cl>uQׇlJIi߼o W>\< zuK7/G&dw>ګGTOVG,ק}gyiE,7j$D`dn u>|t'<[bU,6KX6sN!=u!=v˺- _ F%曨1R"z05mӳhN[H)c7msI(ep&4[}wT&FZj+<3N[;kgFVUGDkXߗN1.-hc \ck+Dx Qzڸ9F29-*+x5tp}57{+D ҕ+_][y=>?e/^}fDiU3U kS ]!\S ]!Z+DIQҕ KQO0p}14DWwz;dt`vknh)PwC{@;Е%osb]sW ]q}+@)$:Bκe7UCWכZ zn(yWHWsLEt%`UCWWW]!Z'NW3Ftutr[]߳l~0>|̈r5v{exM4u=^%µFkvrciKWճfpt(L^BW1j5b@1uMvZ NWJc+ϴ5%p`m5tpe5kWG#+wǮwCƜ]*@;=J]ݷ9g k+D-th }+D)<ҕB(S]!`Ǫd%DWGHW+yEtF/'?6K Iݔ{t]G~"@m?\sBAÄF661Z2F#/K*iccD{b Zw|z9|>u7@ R7850_7v'֧!½r;U-6\LfOnk%Kt6E\1ǥpeQGD;]+s)6͹3cb|@&s?ܖ}+MNš"LUɶqT +naDqXegG`/:;}]eaHM&]??lt#/6f.l*X6Joc:&r>B w45}.I/JYcτŲ J3I:wɰHx>36"jc3i )<<-uz6Xʻ1"x~aUZ\+W17ͣC*9,/?N9d2Kar:/F'o[[`\oXOXW0Uf4xg0ГoW{fض"ȲIb(}Px94S/˭21r(CyK&ld|x6,߬έ:JP`UmXMv p9_up{oj` -Lf 7/7oK^m!)#۾]Zױht8;?>pLJQL2~̇NWA?njWϭvQy )LՕmʾ\k0/3?v'_.NWl;f1Z.;0gxNƯOV{82Φ?]"O_?I~&|5)}sMgU#nf@6g0hs:~;wgt~}WuƋO4kӵy[jզyIe=Fb _ m!`+Bq˅[ 5`. 
Oi|~ W?ǯՓ}ُa ܁EI0) o~GUOwsuuҩT C˷ɷjqLw<' |&լ .iW6 ͦnZѡCT^-S:T,wK܀oxvR^#leݢOG׽b$Ec$|2Μ&eQ7!"SE8uPgѷA̝KN Cpô< s1CsL`d#;ygq=v.He=x[?Bg $07ytQ6pUϛzuk|}}7(B8cu5K^w`:CBfyp}ƑR)j iju;}cR~G OuTuљdpX JȤc,(ֶhMY/)hq\Z1/EymԘhj-D2nW)7yWH{l@U>TyrsA8)YƬj[R(ax`Ӳb5 "xzʦXDʺd{~DFV0>XI;v&Y-SmL^֓u8W+-&'QAG_Xx++F|S ^ K5Q#76+$IS<# 4aam4xZ,yXSv/od(WSI nd2*/I8*L`AZbBIPdT2Rh`fd4հv|- "R &k]:;͜ck:h2&f .XEQԺK,Lo-f1$%gJٚlIHݔe" (Ӓ=T:Ǥ"Rcjl9{?3کl[9Z>eMN֯nkOuiw|'v}.8#g7GcGx93ڣI^Œ6M+=Pqo çwӾwY, 7WD;ucgMh'Dܺ.Jʣc<勺_tqIfNmZ\-$3I'5K$w'M1 2!*r/lj]D $rʚN"5[}8F BS>|#nmكukor8{VEqH?0^mbQ| D{gDshj<_wg-eĒ,?uPcVY3#KsR4gD/Y"#R"蘕:cY)); "930ZƖm7sٚ"Q !iPb uYDJP%A( ސ ReYwXoZW5J,L=*nTu2[>>=DŽY7|.vg\?g% 3cm?;>w[6u&T"}:X*bGw<:>=lǤcvm0ϯowv^`+_&_~eTY/[x뿾lV[ l_WVMTd7O*l|4'S &|$|d"_I[O sm~n˟=Z^6=6,>]2?]'8 jy?͟:Ͽbu:o&Y.xdǽ.X/1?,f:_>GTS-z [LqFi02@'P5TT'n:G}Ϩơ6ʯI8 t10Q =I|Ksѵ $Rڶf4(rl!rX/o' Ȅ&$fuf2V u/zk)@I2`G0P)Z@Ȏ @QC&mL7P3ҔCŽMş2Id1NeP\b|dQ9DgXD&֤a2ij4acfuHX;G#[9ٺEA9 J˜B1 \`o5Wy^;`IRR) J/r!$u.؅R299ʠcgc'wy:0SW5lW͠ʼn^h@Ba& uM i]69n4e:Ȑ42! %xQVY&"tʬӥE%^Ö!JM%ċl.&=]%ܰz{r\Tm5Fe ?PQ=WA9/79>9d.nt{w΂Cc i X\Iklg JE\S|E瑮{Zڧ !4::SOD ( p(IVll”Τ!]u>U=pa76m6bt&pDH| crT|$%ˢN9h[>] b R2B̚XITF!jd9غ|z󴯚ʹ4O真Cfrr`UVҢ?1R2]/> wHC@љe1tba*+rGSN@:Z۸EWR*Dx[TH1qx5|KdU Tkdl=BqvXJ2B 퀅G'\nVf'ffqc͟h4]n9bK}P9\ne1$v 6[n<7J b38c7~@rh",Ia.ڣ5$hm_O`6Σy{q=i |TrZ3(lScdOZjk[cDl݈xQlY:I \ܺ !sduWjド& )D$(K 9DF<1pl;[Iǡx: a K; !Dя:|6ڋZ/-H@'iFҮv A!y(- 6B;0'raew\'ڣP jCa9 !ICW9&|ыr^|e퓕zGzzɊ$@ؒbL?|D)I:FE٘v(˓sJ&JZ$]G`tNQ4غuu+yuXm5떡 {Ú܍{:=}f3wz|(k7iJdK2DDD9bq yYPnP&ʻ`oAwa^\Mo^*'~;7'oAGͿYqxus[gօߗR?FW=c>e9n}W=6>7kW.6+֕I bg2 c0񵵦6?j"uÊl׉}S|Bb(yrEEd*=(rV[&!R Li3 ϻCD9JPG f \Z 7>g$F21r@&Sz*5}o#C p|<1ryj bA!2:C9ɾc]gמG'Do7ǍA7(wmZW8fohLfwfl4NV#KN=d!GdK" ~A7+] p05QE[xi\t7 &S|/CkQRFqG+`Bc=7 ɻr+N7T<+ZT3kS8LTa}KSߚVcFsw}<`(I[KY6ԣEu+Itզ'k%noI SW4@En;rw(|G,dߡw\N..*9*!s]&$B߷+RԬP̫|$$/;ISiWh7LQJQy#Un6oMMu&{!d[Obe,P뫻+?ȻގWyF~㋕~Nbvdr-(1M޾qѤxM;_6?n(_OnϾ{/ "TȒ#p3@]__9bku8 ͫ|*ugS- cEbvtjg-y̨ )SɭLVYg$SpV(bxYǼn5|..rƦ'TE` ;ϪO2p~vKSxb~1*! o.9j!$61»޺1?7n;tc$͵~Gq`+iGlT0F{FÈ/`! BK(U~uH 쬒g gD!" NIV*^;E|. 
1S嬃}VO=f?No#}:Dmw6Ҳz.tGe6\1ьbZ+Ji&pNXŢpE6.%)V.-Ռ$8nd%t\R5-Bυ'xiea Sr.Ia^zmixcczgr֜[ҴC05W6ktdr\ϠtaU ekm.U )ZZo|$R9(3T;0ejYAT~;m 4:!S%wnG 2(UHi䲀xctڗѨ7i@g_Ʈo,]j,L0"0$"zEP8ͼ 1ڍ6)C(IQ j ΄[bc4)أK7#pMPŜ R!hgJh]`C50T_?`ՆɭE=zln=ow< ~X'K̎; S wo*3q2S|ƨJD~9zs'_8>j|RN(dMf#ភWhuDϔV;&^ d Ȃ恊 Gl^IV:JVMKyR^4˒Fs\Vφ#MBoUJw]\'kU9^$e^XNrm$Ia4/mze![%ue ,j)rg +dLre؁x̨< a5ڿ,QI,y[*^jqFQ*!E:xm\G] ~I۾j^GD0y\΁UT˳}?\|[O?xw|c,/b/u7٪,~.~nD fAґ`FxapJL+:ئIi'7?34'y" eGU[iSrq@&3r KeV Z">*gpVKjBJW(Xdpr5MWvW:H\)MxǃG79+<so~r2=긼O˪+s|~V WG89L_x#=d׿߻+0Gss$h.3jzZvG-ANI[pٝ >1p~,Yv1Xxܲ:U⧪ ף쨪`Vhŷ?Iz0v=H J(5ECF_(GT)E-86@Oً͜v>\r0fkv6xxm֛o{=eC?ƕMh! PMfj%]~v5O'+@TlppeL$+lҙG+P++T qe4͉(XdprjmA ճ]1v`O=j)w9TJѱ>hMi`EHW(TpjtcKR:@\1cP5 =R$]ZWO+n) S͒e+T+:o JE{\ S-Mry"ؑ[]YE'ilU229;T:o*vĴ $&+m*Bm{\^W+MU)VQ R P-]|2P BSeB:@\YMi Ke*\;pJi{\=\5-GW}=Ԛ=᪝J1WզEO)DF0#&+Km*BjR:D\1*6$Jn2B7kppũ!&%\`y2B+THqU=WX2'ʘN(.f:_?%ţ:,7;]fD)lL@ѥsRB6}!'s]w-0VaK3Bp{yeJMWnjoF M-`^Z@viVt~jU39 B,!\`PIf*b+Tz\"֦ŧ|f;ʕ&\Z-+TiLĕf7}>PM-F3N;ew:F!pJiJn1(Xc \)S}lq*uCĕɔpLW dpj;Bz6k=?"\qZ b`;v'R+v*yǜy \W=N}\`ʵ6\ZKuq*YCvཟxN N:Pe#{Ru`j+Zdpru2+TÅ[4N=WjKiwAa' 1GF4z7| m(O(Xd0rU2svVw~T* 1}$L+ˉHW U q\&+,e:+(W%cZ%Pqu0 NW(WTpjo J{\ qhr$YE1aryjpÏ?;`F3+u SVW$Rli2jccw߿fXg!P֑>N̙2(R}2xk QTd ZQ⵲JbQ; kl_tO W+$6_bܩw,$OR/?zž~9,筿6FH♃Wq|>w]䛫憊w\@ӭ]]xX߸xvϡ˞>i~?I[GPԀl؇>|^}Rr5ܔ$"enUa{U/y) tOoFݏS] -_n+Rbߧ*㲢ͯ ր2%7擓㒹*\TZZAY6G= d8D!:2DM'Xˁ BUB㉻?#C cvV0Nq. mЩqd4{*@&#Sdr=zs~9{+lR~<A|z;+ߪ_~>?]Pe.(S>tմZt3- 9z]D;;Z42?=Tux| OctNnU1iBFEL XV;SLd*K b:GG d}%'7Q2τ*5|a_͈4 ~&7kS'7|M׎&78?O@L_h},SYCū^H//KqBrG0jp3'V> ~Y |YaYFн=/_,( u.F%3Ā6rX*D,N@]g b+:ב}c''-۽h,#u mv*ܜGCdze2'y&!H|v9嬟#ݳ8ekB̅ ޱJ@*<<|:>[U\l~σ_!d aiqEl6L爐CYPWkF2R/>oc^ghguC![鵚f#8odcf?NY 8(&jI˫$__0u ~,ֺ]\Ee:,p4WvO'.߿Bp*vOK+=>|EjZlfҏ`O+9|US]3owe3Qa [[WMq8Dl}4kز–U+6(?6M]ڃJ0L[ԣǣv<ڷB-SU2}:2vM66z泻GKD`krPp5VG) ;c_ưUgr%-sȩe.9S~7#4/^q ޫ89>_ذı:0?(g^nd)CJ8(Խ3ȰNj;f_")8ǍXb#&Q ?y6(b^dT90N lwE\o ?<miL^(hi]qwp-xsR 8ȵg]6;6܍;u6D"|~~\ɓx?eGsFeGAV|Cuv&chA,\>PIt:uQKqITYgL%D*PB #X5XC:zU߬|D`! 
eY0U2I'EL dDQBah ;;cA:q+^#*<4Q(e"6ǃ DI.mA1䩖Sz3O>`ݪr~{>Y:,pz+9Չ3I(AC`(2 nEШH 2bP}G ǟ%?蘧> @%ьKtU h43t ;5~աSqt+br}=C%ih< L6|+.YU@k":RCbA&h0 4&΁JY(j)@t8 8ehz7H\ &%t *h„Fe,nX$P[ B£µ}/7s+~̓4h4 nqXLZ 1sکTd, 3eKCv8f+؈٠vĄ m)];-6La<]L:ڦ66$"fu@(Gw>*b5pp!$x>ȲuTy0 R^: ab V&,Ssa1vvکlb6bǡE܊#$ё#@FcӢkI ,B@8JMJS`)lRͣ=i&4w ;-UO h*,&%/B+fz]NQ85`H;wJH뤗*h09!L^"&..LtjEa{G0a ͝2*- ~|GI~\~uG*iAEEЕ@KZhlRj}cfR8ھ3^y%6R ~~t8J n`Z$b3Dt[.JYŜ`{q.M:]IC}*o1%V^/Uz.wkuKq NEDL)eXn4\zzgB 9lI\O\ݓYdޅy'Fqf>߳hk^v\zvi9ZP 2L ٶۥ9>'NOX&fNOaNσx>u;UErAlҒ3EG+MtTRodF L8똶LƘWJh,ߓdxfuA9ڂJRp1BIt10Z8_=G7c: =H(t94 є*a1rB@zu/* 0hUhީ̹ZADYIEEH$q QQk'RHBWq89)'Q')XJ{cgO99!MaTu4˼9,˻Dϥ.4DpF6sZ|.r%TIn<|9q-}E̻礵gw]f_?r1P!3:D43,"U\uꇈf.=BapzrMUDCv]`+,lr炥wIë[N(GIőQr UQ0Ʃ2j i3hTjAyI@CwТkGcgvJdϿך%?qzEGr:޶L<ew7zVY\5m9ﭓfn:i0$dT:2QKM9{NB'lMDzyKehRTY먋\84*;lH!Lj͘;TIzE٭qdH]˓V~5ϝ7nn ==K4ZU~v\8k7.nKH@gF[A83XՏ'GW,.1_ܶk( ,pNc=A848#^bY |N tX#~!tikQe۰WwRoMe{X9~*AJΘHJΤ7iE4J4.{.T-Q,B姕z3 )@ܣWYR+"\RT"WJtM&x,.2Zv! 4#؅aڠ5g3%CJ,!hq6 ٣-f twsg|;G$vh5ʶ[l'#[lymoeU7<#wk :qoD9 W| Ĺ O3-Boz:MOoy>(IiL:#i436 V*%-O˓,_ڟjdqb8.rS(}e%H@35:%+,cCkO%LŴ$Oʺ-:tv_Q8d0Ҟv3 Bx 6zNoT܌C{h>=E,EVycpB҆t S<& ?䀑bi}{0ٽ ^k޷vgNt`6VMd%_zO&NR{B[0̄mGbk?Q{7h$ۃuաnPϓ:߇:RWP{ ZCCW!Cه>R&Lu]"dEW%sߚ(5)a"$u BIHQuQQ5]i6i/9U0ouBgot"UmOY iwϙ g;+3Ib<_d2/O)Pxqv3ќ]=mq^ٻ6n,ӵ9|4}૭'^CgZrbَWhk吜Jr0ImAE"5Զ(n|l ż>uohaη6be/Dtq1lzzḅ7 Wh̼vJꒊE֢`JXT\aIșLz`,pyag^C::(wG71j"3ŀ[4~K3AN$} A':&3(N^YT ]l:c ]lŦ}Ҁ55rxгw8P{_>h oeC}h5]& m^]o瓼MUjqVmVۯ[jExbO1}r:͕!uPwܚ$M#2~Vu_7V'% j_x)w2ؑeBȪNcV2[-o,2洼'pR*"Q{&"*Z9-s9Nhe|܊Sx=r3b_)~-2#sV>($" s^zp:dT*X 9J|K^ ?HngÒBaB&BTm̑Y1`L{br8Gcutt!Jt,A/gec 0I4Y!{A (r 7i|[%#t⸙xt8UgE-c&ӿ<@eG2ѣbp@Y@r)An-G)s*0s`KxɹFH0. Rp:9>X,ף/iĆ@BZO)/yG"bBv<򩉑M*Dk2G[1u2'"a6 7:qcAF t2*X4dJPEw/yi:N4w),a@vrpӳŬ71Yla@ 7Y|xLL? 
9?S&mMRQ38'bX}%Jyw3 H_(He 9 q&:ONSYb2&I"sh!d GdJO#kclBOǕwC ]H>Mտ,́V=dM o%\n8OXtpXBrJ7߿N8J~HU{SZC>gL&ގ@/Gq~!|_>i'#RgV"> @@0K}73qvA l?61'j$zp]" 4 H$7~y&Hv||T8(/ rbhӜN|Hs(ElkG5XL2jE].̾R%7rhK'NhHӣ+Uƥy_]Sۺ7/GkksDח'Wh%]s~d>^H5,J||o9iJ2'kmORٺnn kYew$1'~,XvbgW}F ^9 nF^g%;-dt}6Y&*jC5/5xK5/U;ћ߼}S~?oǣ}}״f"בrk~ڎuݵUߢk>]cn'rrk\|tM1; - !W?\8~3s/rrkE<*q+D6yz)Xq1" |23B<*:#Y *s*^L1X쌜bl!d8 ʾP]缳TnNZzsѯ>^E-ء`cv&k!®`%j%@S*W)X_N V쐺"; rgUV龫BVzꊶ~`zU!XQW\%vE]j*T8QWYtk,{5s'-s}E:~Nima2lr%G|jT^q7i 5;8rbfzsߢ֩FR ijxq].Ҧ魣f\O>m7O2tQ-_07NG>A0Q5lU)ڝ|U>&o!1QP=W7J0T220[{m>HM­VpXV>:e | 2Zzkg *PH[-XI1,B{O\Y46Zk鬮.]KiZYRPjXYuf6{RRX[dSr6ʭ˪׍P}u5ezra\,=FSJW/?/?N.M#-)DU+`9sa4 շ7hvޥBrPΓV26ģe]YBܝ{BڙRZ{uU\Qz9J[f.nL쎺**wuUKTW@kC6RW\л ^|J9X94 3xFVn:+lIJXɫ&ԧ7 G9l%:S\;V&Zh5!m#>yoq9Tw[[R:w4}FH\:VCғo܅+,j2Qzyv`KIm$9}.^annf=bg&eeg&?VKrDm+NۖN.$dJ!gT$ЊIL礼85`G5zyAѫRUIΛ}KN]#RJTBS]&TS)7r߻G#sBiw+TuUe1猪B Tts( yGMD"EBiJ\\bmU0,t%E1VFN \pLh}؏M:W5wǭ:WnOUljw*g3-Cx{t2V!ًMƬs [[鈡*(6r5QX윪]"2r o0D0Fg# 0xkDҢ GfйUVqdžN) o #Uz~; lY[Mo,}nv˔=4$Ӥk2.nԮ";ݢStD9wZK!qjB_)tw ;tu#-vѳHW~<%hMpR}F -oFW}Nc:]`e |Ԁ>Py)[f n\h-S NRmn;MWG1;,?[2+ lȘ*!%l59C qV{]̨"RJ-$DE(BŦH1T.,{DT582Uz||qvvY=[ۃlިŮ,˧, yFUV=*j=ϦFmÖh߫5,}UX-70۫*SF( b{ 9޹6q+ъ^ٯǗGy/Vjp8)cװih!zyr@U=V:b6X8'5MY1s> n~S'`'|{w=vߗg]a,_f;:b)̉w1&9P˺ZI#x..$zE$y^)AUNPH&iop.EMxqJA*?O]/0]4"1}>j.\Qc<Y's\}p H+!Pl!!Z|r B  Ca6?Kx.3Pa#cmϬܽj+z t#ugP-fw#A~f(2_w b`E\V.O{|KOG0Ji~|]m|`V&p;ɘ 6t$Z y}?2ڜ_M؃bղa7$nLޛzo.ZNieu 6H89FSMGr/,j^^HGg#gpd{ȼPY{T"*-N."r#[˫}2mܳf=P̈z˪g\ISq'*MPfliax4ԌYĸOΉLB睆 jjƼ?ܻaGԌy OlnY|X IEփW$CZ` dP)!qS{g/@IEZvu \Rc1՜f R FCn%}j4Ȕ _sZoڦEoIT-gQ0*`-d]UZ"BS*qAF㉍sG=̹}T>WkBBHB692x j,g51Šx~Qը=jahBȥUAbt^a5AiVKFQR<9"b#hg=_lýP[XϪ@^WSKTډS*YS`)Cv3A1ɵ^6Sq)L,0& );'e1& ,VWgjبJuM**T5LxVN8B:"Bb1ZLUbaR&CQ)VǺTd 4'1ثhG{=[*n x'K @0HQ63A[a☲bie# ډnWk$|=oi [?waZ"NRUZ^t>U+Be#Z&=.B02wZ5œUdH(HD ;Y$L&Olb0R:jÛG @L1hUw&W}X{ދM8QD+7Ր䊨ܮ$cjTP99Խv1qRI[xY8B2I& CX,9lJBgJt\=>{{8Zn@W|4&(ԗT!!T#&ծEwr As.trh?ٶ]$v9w& IzM`l +- T8mxC 侔{}[eـi?= 'dԄ!Vs5zQf Akp0iښkkڅLqb)jFH(JWYMEQXj!*H2}|~^Q""LIe*(틉)H"PgCkw?\ZfZUf5j_rϵwh3WNCYpx+AbşUk`xئy5~ 
ɛ|Wf+?fѡ#*C_MOy= *<h>Ehh@+<8WoKXS>pxzPNvo*UbRp,Iy*0A8o4 *˾kKEY_{av{!ҏ_wO+xzpR2J+hMZNd5XG(}Hrי23FϖÛkM>.,huMsdZpix^O-Vk}ՄB~׋{Rfg2vLC4^i6ga\g|\xl~aЋs_pgEuuY%kg]β摾f]jQfgu!lo]62&w x=KGB"͏?O?ן|q$B܇Rf uv/A\atwu˼Hlk߯zoIОdp^ ͫ?ziSE6,i[;nM[ZZֶS| ^jiq\.觛wY,e(n6Q-U,q!\'8WHl]lh}p*^` !V.G6𒉄c{U7X;ll$~T Wn#YG D9ڔBN(C5b!8QYDgB6^FΆpzK0y0*jzibZ3'BOԂ,I >L1*1^VӉ/$ɧ{蒬V i;OhQ=mͦcrT*eM}C5nSјh.$ZfyJ3v0//6|rNI.;NL3/h9I;'iOEqlJ9IKB"+'̢%&x.0)FfCbU%.{["KЙ!z,!2Ky` ̕b5qirʿ2\ckw{tuv)gpt MZ!1_}"Ɇ؟Mhqh(0ذ~j6lr- ooӪvs-4*3ti2BS{>6 rs W߽L~\lq7żrh%fkü3VfߟW.GeKZca݋š6,ug)(UMJT9cd|(" Hpv]KIC 7[0Մd}4(e?z Z;Fr={dd%G. ]aֳAJd:zMAM,&b3L|t4H=rɠtј7=fg=wt -ۻ6z$-n!m]vTHYk0o n]* /mAk|YϺo^ٯ7bL..ȣ %)ᅯ78q=NZ+Q()S+Fi^i&IxHgzwG%O[2eǭ}3'cA밡B}%-e\8xOǶtn¿AV~ Z Zl^~>jmu9?xE-dցK?v7Q&ѹL-o_<6n^z5|7.MW/ $Yn>5jH،ˬRdanJK81 !wN񍊴?2\wAg9?}} ]:a(>ɬK-{Kv\JQ66@ qڻ&k2 @4 =OK%B/f d2Gc֥pqI;eO#K)â%uVIĻaRհT ),+e:C%76f)[6Lڽëp؀}J%.#d0kssu2^x3^IWiw@1b*]Y7tEyfg5e5lLhzXw*҉2&N}YDfbG -YY vj5d$sqzϦN]v\(8t>y=ewf -ϗWcZt b6Φoӹ+-^ũ۸Zl1![!6&z@ȼ |2z=zqbslsp9KGaA..'% PNfDpR,[GWXo-{MD]*.){kϺIqtsxYc?Q9CYEM9k.M5{]!yqp\x<2[xQ;|\f]c{{~hZ_q>L\OoL-&\Oo-.lz\͖tQpzQ[UrK:v= SYi|1kv_\wKm2?1m{V!󗳛ݗۈtfCg6sm+N~<=vn}^^mЫgG \[.zGMj.q꾵6_v:szgynom3xڞӦvej/|^7aA 'uB̜{~f~zc5 =vgeCGJl[;J%8O. 
J)/Mf}uk_Fz,2#::' x,0%kFZ ^uVM#*WxnI_{l)\Jzq+lW,W1&ADntH+dhzqeP'^ᅬ)HfʡVL1GOhB{Qx!iHY2SAe.PW춊h"R&U΁E2+g'OB9)jf9h9KO*%ʱ(BL() NX{96M]'=\}{@7;wD=iEI* xRF4!EI2 [q9@N*[I+^~E1AIbTZ:( `&֮[K'j:[h|ZKOl|Jj4>U/Fm'PrP@NګC4NM:bH$4G+%e >@څȨɈ9V*fAXf%ڛ DŽIIHLUYicY 9˜#D4J$l$z$2+"ȨNxXM5Kt\N}Ajq("ʈDq'*t mI%+'Mi:}rƴ]ֆK4 r!ZHdlr#,iFz!ʈXMD:.5+묦%] DTzlwzk.Pr,q%.弊JȂR IMHQ'RRwҎC*< a {D*p+ n~|G,℺[>E\mNeؽ8ݲH+nɕUZ+Y{,ZP N~qQ,h 4 kJ1Vq-^!Kzva7>Uv֗wR;Ǜi!Tp{ܭ{J?qy-dZJ );ϲ3N "y+td'oiܖO,s]RCwhw8_sguz5 ANh-1;\A) K2j .z5x3:\7ae}ΜiNRĕp*"-{\"9H0'WE`OZqݩI "%\}p2.Otઈk٩IKdpUlY5 v=%~= rӺD&g/G֌힐 0bꥒ/iF7uw*hQ #_*<[g껇?5SiXiiރ4I brk}HA-m.hi9gf4¥'&$'B@{ @_!{IJ?_˛Y9KU[z?ofח Ѳ4VZ<)xA-w{Jᤵ[oKd3wt]+I}\k[l ܣiFMm <cêz@zZhJsw>`X~*aZKLCe* 9Wu,mR esJ\ZxGȽPry0vyޗ$Ps'4g?&|KEDzaJ3`g`Xp^f3VAqϔ`Hbut8| [4I.NC}ydR-(!(EII!C+!!g0L I!J_+wh )ҿQy3 ܄gv#9zj&/7 ڰ^z%,*4/k hZ4YVfS'NdԎzLh-DscEF}{wsI+pb)Uf뀬-PR>JIJj1в' 40эcT*ИU6T3F+}RU=QKʇ<_{Dv~;ێ[-4LQJ݃66);eXDieѥ tFa-B1U6F 2{*tQN!yD!8<GZ]F\RHO4[V1\NG:SAd*X !IBާO͝4e7-GpNTG*j$7.(]cpπVwF8'֤@{ur38:Ahkc0mF. 3!9q{bB.聼0Q9 _ E*UFG>rN rSNG3PQCm>+h-amm߭83 ALX*|_A2&/VA-#1ײ ,Fs V4oαsA)9x X5lGD&0VN ddR!}E{l9K, |FEEt(MUlA1n2NF>ΫmO J+Y1v|э!ՠPw^KCp2P(SPzNB€(!2] 4HAjTy>:LAZ ӈ /|x!iX*F=F~pvM&!jhX #&gAuP<9utl<+MT$]; vC4*d`fPDdn.Y. Ao%CJ$JET2R,zGpKR@D],A ?r7ǩ ]e7؀~nz{d?:4cګ+,Eo`.5BgekOak0="4^]f6fY#e-6n(ǧ-7\}YeF٤#vàDy ֵؓJC6G=T3ʍڛP;:(jVd*! %t x?B3 A)>֡7q1옍}u`V$5sJmhB $\; 9)I*[0jH c,1@Z*(.FR<۝oYh\SclciЏIh j hN\76xk+FnQ[Hk֬UVm(5 Rl:L|چj='е-9-Fw*6LLo/5@RMwhUdPi»*f`)o"a2نf@2~.6O 4%鰀U2'Yk9y?bt~֘-jUef4&XTdǢhBM:Sj/")wSJ,;wR@õ1Z Y\*y׹]Ot"BPQ>J+e?0 ՠL[ >kq'INҔ+ w\roޏZB) MjcjQb=w#ӻo.F'dk#-Dnr\zZoqsV0N~+~{xFQE9&iW\6pn=X}=ig0\e=GpJ WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\E W(uZ*)j W@kj*t^ WhkCbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J W/ibд"i5+i-+%x܊Y1\;1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%5\qM+{c\оa3r+fpu@+Rp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J Wbլ;Ⱥ:oxE/з= w߿[{Qz׆tvM0j=0 a50Ma0O;oA~=ܒs6jss'pàs6uMcT\I#SBeHz9)Н|.&*L9Y5Q>t&_Cv ! 
5YkD9)6&*e BFk^LoP4 {ED{*)j& Jyx&ZP.AOyNoԄAl?jQ]Tl0ה!ِ %]V44gQ3 yoFݾ2 -{ب7WHIe`>>wnip%N*ݫǯ߼ Bݛ޵6+׿"h,@|Iv,93^`{-ɖm$˔-4QU,v@ keo/W?ʵ7}|/wv%@w%,5. /efYh]5V@1tX[ PID8cQVj+")E 9rBCVF M$|(Xf*Hl{"&]|Cf ] &>=*C5M9i-nâ5 EUS jq:FSx}otN1ƨ9 GO*ʱO,DL9"I(1vF16Kazf={X[*t}*h˳6{t;+CO",N '==Hǯw7D==,?O( '~L !|R0XKktyz$'QLhq{q b>TM||CPv0~HqI$ _l^ -Mk|9aM1aH}l5u?w:Y |g|6 $d"^=gs <-~+zb]nM>;ow@"mgfTӁy 6|vsYG޻_GˤVa?f8N1mo׳ͧ k&|(gS*DUZ[RzǓ@eL9s%V:ƵY#rd}]D4Mo#r˸< ~7&en׿F-B~1 a t^4R˸UoCnzw_ϓ klaScIxB*O~*zMVu[3'o<7uY]w;0j磛={е-(,Ǜ_Ԥ5gw.-;Q= vsJToUgqI- xqs Nsc QFJoi7Π{jw!mvHuq&hH{"uCǷ8N+ =$! Z #,3͸%*jtLULJ>%2E]J%Gt\!ɜ&)AW֜;!<҉{]Sai[na}_:~C=. ;ejſ I*,2`vAJ̤̼NFCU T9\SеVϊbxH)H %\hfB@֜>fF4$&o_l.~?y 6齗<W׭û=b^OG]PRV%'mY(n&WX*bs:_vc[F9*HI{Qe$Ce$gͣB 4H18üGvX59ƌJ[[@Г VB)9KoR\ _@62Vk%pոJ=,b“b]ʽ-f|I˃mp vy_xu1с"f*p`NXGäS$qtD5x;sw>s:}n%dv^¼qR3w;+V=:PiNEق"hxj-{ΣwRo:-?Ēќs!H9(Z;(dHCb 8Fdn 5:(t&GAst2z>$ŔVs*[d@Z&)Y(g-ucZ͜gHD *֞X9 ~ $.sR Rō_M ekR5;ָ(GeO~>6'G1ODe<6RF60 X,";G` 5E%r2^!l7Ƽ7#?ܗ׳ͪOof mO|e1\M7kzcul=ãe9fk)@,AE'PBq Q-MքlCn|b*#vtTђ̉ɇPO?sۭAg6hmVxxVs0M$ Z-5BБCaBSx0 ==|wȓr1@y+0&y)aHc n\J`lUF%JA9Del$ޗ%ȬQC :I{:g'Z'4oemբ,;o!CG PоF=Vv#@:- Yiob5u:lPE?4 4}y߼f\d4>.HGC(=\C]Cr }XLglzu[qgw"WŐrneԾ-?=,P8sI!5iLn)OLnqNHˢ:bs>UfS)|F 7V:Sظ^q}LsVW*Q.C]zivUF!EY4OJ-%3J_ cZ3N3\$rcݸw滱2|c3e;ZgNM{l>{slfsL!S!4RF{%5hnL g( h!K!$Xd4 23_P ?F(U-~ G5088k /y+d'k$|po)RcUp(AFp5 ؒ8_UWW<VE#2tVNa6J xc(ڨ -˪=?}Pj'_鶴eۗ-p+C7R J J;LڸA#BɧRJ/ )P*L7WwuMZYfRDK7٨tNsV+&qbDė( NQPo tĵʧ;]񻩬g '"EdR6pJ`RՎD(H;nCq+szu Ӝv꾂nݷkbn[zulW8ۮb <&E CHT(K+ ԊSgdM98v+:t(w2ibk*`b'#HAƜx<HMY)V+J8ZS'HJ -vRuPFk!(T9329X[B,c$^8e)J5N9,C;tj&;Y+L/˄l&΋WS3OK9x!_}~pM mn_ /rH悡IJrJ$kHG hky Ɠ`SRlڃZa6NZnn:;ćA{@$ DD $<U: DdhEpt*,51,vR= ?(Pj3(Ad&*As5,x"b1񠸦NE@Dnk#'P`\iW6_k,[=Fjc-m["4p\hu)iLydΰgxoa|yՙ;h˻E'|kpaja'R$&l`E<9w8AzR xR2,ұT wu|In]M?N?.Ɠ'lzv>n0􏜧 U/9ủQkRTz)ᾨ(S#eU(*M}e"6,YwU1 σ>o. 
(Y߶֙$/&>[b#ˆ~b _uGMl0jhyZ\V;d4P0O\~"+mLkrxP 7WY^NƟQ.O ӻw_z5ѩ9&7l)^N{vL,RKu<k;tÉrH mBNA\*h"6 )Ք / F4W5\5u18q{cࠜK$D))ZEFt[lyj)Mt"DlpͲQ<ʯK93DbD;Tً &mp!9HTǘ"QR #"6y <$CXi_WЫRMl$ʂD}t"!h$^I6|d`6}HQ ӂΣS8,7Hp …HQD`+nt4Q0)>z} 5?jkQ-•=)&q?左k] /0sTU^o\OP,ΫWbnJ ~>8)Ђ JFvQOFP@cr.Vx8p)\tei'R@xԁhKKr VGg`4Su]X8/3\"Uj7-pZi М, l lj@JkZ=G=q&0J.Iqy> ?+OՃoǫ.1Jrm\4sܮ8Gx< 1rr0z/Y3v$!b0l0ì2GܴFJ> \WDO\ f^N2y%׍]4R,G쑪<ϟǓO"Cz@q}7_T *_qȎ@qtw?<̜ߢ *u$d~z /?n=qfhn6C6z1[hfܻkSTq۷`r拫&R:Z-A\Y\b ٸ/.*|*+^R.b%B ;ӥYs $I$:t(W7.GD+"&`DKwJFJB@8rXmuYBUhc6Rc(P)!|D=2:#Z 2@ R甕@T< [#a]wCX+s 7k5DfzT+U-fQpa\YF4 !(gajڨ%EL X)8R$X:m1Xy$Cۑ K3- f|Ti;ٟN45CGLkKl"rCi GcBMJ"-xkBc!S9oZhh0}.Ap{5>|+38PF eE;vP7\9mnei42Zn˴w!׆ ׽1>hZll׸lP+(^55ZCmTi^ RY9{6H՜WG{6 /R\#b"S Ad+!)J(SʉN]zo־ce W8罥Behus7O䐬% Lk'o-,k"cPQߋBZϣr"4/. 2Y n ;knΦNuiU<ޫ5MH5F\ Mzq6x[o!'ܒ5on{n6/y80Npp`!?xÙ| Z~f-frޗV d*Ҳ}:ׅB\YٛsL-]WJN{qՈ+2գC%ھzTzLv ]G+ۋ] F#qW4g<qZź.Đ^\Aqń]=`cވ+$/ ﺸB*z 5/2Z/*73zJ05XY9\ R7\.R^+')Ra`bDI~DxG_~>guӈ9Y|G~O@woɵ'1- vP%ޠJVJu1Tj{1Ŵd,qɥb_UwT-+EGlqd_UVꮋLfzJ+T2BrwmI_!.y~?w{I`bla׹6ɐlo,J#cj_ršb1f^^=ۧcŠcoZ<Ty %O'üEۯ7|^)':X.ۛ}eo~0oˍ *W4+5^ɜ\@l᥷E`^'V;r{զSFTv{YkZ=hb$VKQc2^k*%&O7kL) jAѻ{QSӧU[M6s *`Zc`B$[$dZTҨQ_8IRj#q}_= Gl>gBXŵ}MM53 W, Z{mY8i5Et%%nD( PRj::ARjiZDWPBSkt(uWwtMf=5W Prb;:A2V/ɻ%`՞">k[Ss h =">Dɻ"hMsBej+J::pvhaV#&ǵ$[jWS¹-+lLk xЊ %JF;:AbDjZDWXժ-th-?vN81Z6}/ճUsWX2NWNg`ȶbXOmLKV﷤B:6Imi@=vFSiAiSTm ]!\BW=]JIIGW'HWZFT3Bn ]!\EBW񣟳C{pte@=m]`M]!\ޚD{svvHW)u ֞Xe[ >wT]yCW9yWJLW[Vr;[e@tJvdSl b]*zʙREt5խ+[CW@[oRN,ƟNAG@M \bSZP14Rc WF+%R]ޜMC$n}x1QIx̛7_%ÝElP`:ߍg󉛿+ʱ4{QbdwPyZ K Km\/]/z{Xs8c¸lyL@/꬜]<_":J)ߎq/s]Y0 gp2X >~WJl #Ϗ2xtnIQdyoN[ P |o 5QR+|S2DC}I_ksC&I!DQR_P_dv޳MvY'Ifo4kzGb]-hqm(s!%U-scDts'8GɬPɠp!pXCOG="%ڰR%SJJRgҌRyZq36K*Vs[,N [\);+EUNyyxxFr@_X̎j7gYA4vx[0E(MѿG7wVq#ܕx7/_w-XIbh/i|#曆Rxm*]3&$r㶉ˤ֮4~ANhmOTk唡N.52#Sz#DCpM~ y4OgA~\zj-x/Wε$E[lҶ̆ڃJn ō+_+8)ޛQ((¢6\fZ8z~G sPwe~~=i=ȆUy}P">8OHqo%}RN(j>;\7{?L3W WV 6QzbrRik֚G8r'')Jȵr}l*KtBAuB8Q+ *.'pBI!٤OAp,IuF]ӷ+yge}ZEڪ!p^Ŭ q) *U A#[ܖR0sݻ>[5LeBPZڕ℈4Ute+X[ Q ҕ"BBl!Vc+DeGW'HWZ "DWش'Bִ-G]!JN;:A2RV}-oOtpvh?vB;:!Lߖe_`.Mgp|w^]\c 
||=*nZVc?-hvBd i,3e8cbul:s-]n.;,$*OF?/KK\֕yuw;?΀Dgu €~Z|8 lY,M9e0 !>*FeIOQ)Za¥,,Hhi}JCBPS^ʿ_}lQTѠgը&ߖ-.Wp~(WLw7 \لwߜহ *SVBP,]xMe<bhJxsMKL0] 8¸X⢿W!_zKRy\jZgܽ4膃x%eyƒp_ozרt0㣫m~Bխ}iǿY*m=~MS;,Cؗb?$MʸV5C[ojuKa:(,Ge`Då;흒Є$d +M \$gI4rE M,1]_ä 1{T}tD}2@ <8Sϸ(bSV26Jo ?d.yy}t%ayhX 3d;|o_;yXyh́M? v{U.u;CJnuUmh C8/|v77->Ɓ0f_.iVq!3|K$~ܒ֏+L_dί҂?񬖝Se)S甮*{mOށ0ڍ:,Ύ./?9[G7ae@pΑ.R8OS쯾; n8C7竺9W ߚ:_?T۾Uk4n>̀G)}nj1}__Q2ÆY4O՚~$_=-&ٲLSIB˞B P抛2OeEULLuѾ=^D)| FcF7@T,i!wZqlQr'qvt945ܓQBBeGH"FjS"g!QLXHyəQi%^erIZX4KRB8)0Dy'bPO)`xxYir+ MYCVyDnnO=t99𥫼:uC.zr "g飌\0#sP[O$i 8H(ty&rd ;V,&MRZYtjˬ0C^{s6]pxf3Gq댃"'jDFoh2i3BX F<(SQk&@Aegm͈i X10e{?19@"4piEj3&5#ryEtѠy"':Ґ]6l'AfX^hsCpIQ^R,/YGG\{|]zoIH`ɪ +69P 5mC&.KӋۿ6TqN;Ydy!b:ÿ<'}W*@)5(@I( *sJiP Bn-3B!0!2ᱬ(5WJWD# '76P%LLB8ĔhIMGMy<*ͦ4JNE=bբD p<.ySDhx4Go ٱ@bg5֒Gl\1hMoJ3'<Ϲ0ŽqV;c&;w=EeoMZ8W! pF5! ;"gDݜ(/⨾$rUQ F{N(OT6Ki&7 E緙x$ok 1,`dq> l-Dk{o_-?_]?\|TQ#4 =fuݭpȶSSgBԊ6F=q.¨&qz35EA<𿵋ɫw/ׁ}bMĚKi8NịfmYsd0B|%rqGqiQyH 6t5 F9:MxR˧ISc7kxt.l5r829EGed$F횫ZrY2=>L׳8A&͝qSt*'t67~xuGox߽ǏzR߾ϗo_5pM.4<_Fy˟P(5453.9ł1.%1cJ/Un83wp'M{Wx䃖\KN+H6neY袦w2MEWRt.UuY8r}tM|Dj#m$mA4 :FbNe9R"D1v&J/%e9H_Њ6iB^%`Qb Jx/ w "%1E Ƃ -I ֑Ξt*O|%a<;:`)lqv8aܲ٧^)(}F;l{x,]Rb vx>}##-[iw:dz0_ai K@H\kJ=!&Hῼ3&H|2BdvPsLna &rk59zj Q@q@:gMe4} SR%+H :WiMnKRҞ ՑGCGee^$v rDInFi e*cx&shTDܕ1zq.@N%eL*$ [{ZĜSU"<bȡnI>Ojy֝yVB_:;Gaf}z֫TϷ b>!*kBLᰚ\T ejY e*,X8䪁E|y=ͯFJX^ Q+%\ڪJNKqd [зk Bp& A0Hrx (A-Q-b1J&ijb (w[ %QMnp2'-l #ݔļvQ >r#`521梒%MΥ _'oČ4ʻ6`|џ ~Gum&-hMLt6jM}^đə[γr-k|ZOM-¶Jy*pVY<<1G=ɇ[t>Mwd ,󋿶! Tqg zh_$AC`iq…7'd% 'o-,ky&cRQBY-Jԭ

<y+|uGǍۼzYodvm}VlT|Ál1JÁt~# (by(ŵ,3rPͧWTgZ޹*F'2J@fVDq_IO7·DN!ޝ!13>X o*=AGTZZBiHތ>A%/Dn^Kq Uj]Qm$йRe',,vCCzй߃gV{3圌4I[<-O*1Y4RRDA$%4EqwRYkWGvc=eP6gFGυV()5; D)Q P)^XNqO:/~Ų#=4 =C)ّmh' ̪Ç.$UZ*A NV9Ǥq=Ao*ջ~"zTL"_ZF#WDKD"&;('HB\ d, Á;Dv?ۃHԐO'.AYb,DGUqOBf %("Q1[fp7{G:(r *qQ$߂h>M\k҉_ډ_=Q־[qz7cKɁcQI̱BJBϿLz&w猗QZJHTX$ s`JÝbQZ^JJFU !9G[j EOkB2N A\R/\@d,FnX8PY ByGnͯ32>LRp-_0o^AeGp6%6`,G);Fd)X̓H ϘUVF֘qtj'eƞEceɥʅE$b#fJhmk e._9%Ú];ں'}B8^%FLLABpP\*].H$iA-dDЊy!tB%Yd#9G\d$!{,C(,a"`'q UPw~ȏJcE?尿޵5xyY T(I썭THMXco B[tr>;Pp _ j=ٳFcqQּK\cIMp <~l~l3+HR[I=I1ΊD\+Ε`< 2i -kAÊ$!\΢lP IQ9/SRXˣ Ja3~6f։W3̫=*/nI>xrGS?=RJ} Hfu(F6/Fd io ~.WYsx3~^_'\!~SLoD/9e4>\IVR=JVp=q_Ԕ!@(=Jzz9K<kyq:x? UD墭B|i.]m|f6XeC[}9烟ocMO~>4At{ƪOAؙ5yЮF*MY_p__Q 9< “ B囕 `! QF2]r*K-5pVZ9;By'41%es \VfhG6V9E덜 ]X5Fj92cK>~_TO''u5+vҾ"t7ξy/͢OqD-2LVFbK(0$DJ1`@rHx&=kg@Wks|괋tڍUs^Fy5s3m 57ΚmOoȍin.7ՖMf. lM1ΰ=&0Nj6Ld:m#;N&o@ f^7NiMܚrӮm9Y7lƵymua60͇GBl̯4tB('. J.m=MݥޑsVwaJ*un*5O `Jtrugag{>twTOLRvmwF[2>9[l6OՖkejs#)tQS-XCfw) _*lTv,"^_DNOR=y\* !-~Iǘ0Z:dY&?*gRzc9iftWz7|+둽]Gpjso8J5%vfꐉ2 $ͷ-#gl,7݄5FظyGvrJȍ+4Jd\,t*9!;;.tv> _%c x+-ָ`L"}akO-IހL^J% `Cnru+Σs3) "U$SdjI1dSYEYRؾKVWmgڰ=JІ{Vkh-w-B+X 4gt~BO 5#}*M4"5*BnRj]djݮ6}Oρn+ZDCm`_!<*vJGHXDN+6H ,BHAT) Mk|C/I䨺i9h[ ~>f-d ld+Ugf6?ѽ DR" SIj˜@HYwR": 7r* CT(T͟EP%Ĩb2dXn,AO9Ht)eǻЭŢkH"`4 !r&bBRYCt ! 
+7E.xFXO'7v_0Ħ72]!6|+iu}3[twl s/1'&d-6_\͖BCPM-JZgjv37^dGMHCɱlޕ9SR&+EѣIhK Ák/!WCm.aNcן^ G3~h !y$3 l~ܗ pʡAL<V&`@oge{k_<"#4߽:8wطBҊ=c+c_E/}+vW* }`{4FKGpA{W\WZv*dzp%F{WL0XW\7U]J\AR6'otp}nʋI#஌%{N֊wQ'lݔwKN{ ; FC5oz,?O.Cs)?w4G+q P+%Q &?-+7ꥒվJSo<^T*ipߢzVI*ϰ9\Ur+V=57<*K\2:'XB?{Lz_O+ G#o, #l5VrqoF*zOr+v8ypEīy%v 2KFS>l[jwWѺ'.{*CQ!FAvM(d#V2Gowa넳{}rZ 늈$t"z k3큑L_~9]-"=wh)y}OYw%0ڬ>ros|`+մ;`h9lݾ=e.V9G2cWQ,dbRgE/ ؆< ы9ߢZ/UJcBB E=n jLx̷qۊV"דF/>;UǏU1c3)eleWԛ/bӿ=m+>+uM_K?6YρK[}eն:y>.ο}3uE@ : 6Y,԰2 yH} ~v6;To{Zyz7*'ƒꈂIotQWVHhM.#+6:/G( CH@|Nޅ)DAL7NcMȹ?=]fKO4]d-=~s˗.V&c :]B=oP16_]3ϙ\6b,ɗɒt R* }z*@nQ>)k$x!{CΙob71`^S2StTP LE-퍊;vsN+Q+Q o>l4db=&5{SfօL -fl* g%0SȹߢKE1_ƅ)_Sr#jaP{۝rDU)>- J!$2 mxSD:2-=W1d ,% I!.I|emUj;%LH\ډ"֡J`KB^/-JhHMX:lĈET(gKdr9Y+W1[|yW'#+*ًPKSIR>/κdRHYEìbD7@" #9~o*QŹ9ZGD "T aLLxq:I' ad>_!Ua*AdP:ccf]A p鮴4'v#L-<5#QB@P@g+eGm1^hoE m0msS"[ 1RwkAJ IȘ4!Em*I8 PjcF@x(X'ӯ;#}qȶ=KMϳX?= tC.-=nH5d0Pa3kXʰ]S )XdR MOAS!eO: ;mtg#rh2ܦ8$J)lTP,)&PÚkhBZ@Mƻ:99ZZwJaM%$fAz6M#\x~jSbߤ scU7_ugljUUIh%*KhU( 9#NX,H]%*sϣ 8o8~6xhZX7a&$tHنfzѩP}r!)k)"_a|$.+r2$`k`@dŊz$XJLjt H:lǒ٘V6hgD6+\,olZaxO+/q9nW^M}~_.Y z{l؛8?GZ.=tc+-Lt{?aqZxˏ?>ocu8uY 2$ZMD#P7dDn N5dj*$@@c]׈+ڬIT,M#JzU)Ogyr6Bhm&O]hOd-8Ƶm{G/T QP_'=xg4חgFCf<>?j6~zgzF*&Oc- >X`Lo>}Ghڅ(Bxm6 Qhs:B*5~L "_{C0jmTf./\4F(\TQ NTܨ8??w8ppupIrĥ}S` e I*X2Č`vEI2 DC9 f(M2@s'/E!&1B*i-p*!1j;ks@?ꏋwK4-Ǵrl->9r&*Q9V$ ~rw<3T_3^'r FĂI\EH$4VJF3'd=BaUɨ1fDV<ژ%$zR*C}6g&EKK Kڒ9%c=ROce, ( O* ddv;I[_au}??O'˛W.%YN!wO[1%Fk!q>x03F1ljVfe(ƞ' ElJmJcMHh2v̺,%p%m:L"g,nCոXm+Km;JxCsY 7'}P̂#v7([dORNB UaVVq !CsFi D*YIdT'}eyX5KG$""DZr7F8Jĝ !1O(mI%+'.MKvc)-dť2u1ml@r2,$]AdC KZp%DF~2ѫFuU!:qbw"n<6Q.h\D-ZJA]z y&DBH)aRPiwc!T8yxVtԋ8U%Tvяg~ָ0^uzתV a\hdžC4 $m71Vq-^1zcNrzXíf>4׋ͅ9U K_pDi*Cx3n'A̅ƒHmI6gܳs&y1lh1exiw/PǺr >"JО|bZ˘@I+hH_jjqtD5x$ybk艕4 f876%-Zna k2N賊x{7аB*Y>HGJa2p!k ֆ, F&]`9c)4 ̴bgcG͒8fzKZLM+ s~G%A12 ix2%GaA?\ߓ:Uj8gj}O@N tB~"X0vH khSsY2 6th@T;Hs^L>x]lޞkٶzboo.A_Prf+%K-1[#,B2pɒ AL8kbܓ¶ɖid<82Z$9}IwG21scPAKO*Vhj5r8|~w^{+HaW,Kc i?͚+ݑ;p5 edL̜Zd67n$翂ҧYH>=Jݭ:W.9WE*$k;|DP I{핀At?Gʃ#\|]?o0#K \GULv^ͼ_t2`& 
;\Iۊir3wI-X7ZJ^?N^ʷ"8ܣηĎ6QzzcC{CnseHƓߪR=f㛛*JjN(6U&$iܷx Ja;EY#m$9ߕn#< Fh[))6r;yn[w9M)=iX5ך = ̶qΔ} &bnfñ +ι*UJH~ʐyU (F )RS4mCq빻*ȂplC8eҜŷ*\@6_\'^ۛNg|UazRx:\] Fa~|O>CQ%w/]hgs>^FU"W=Z>qLYjK *rY,- uC&bYq^SV2 vjT%dWzr>e:lf$;d-$9'kI6-R wrVQryzhN-Z |MJEAޜ(†p |~nS:;m%;xc%: G;nBY7y9$̀[pD:ep*¨uz^@.ʃj֙lmu_rmӡ]xXu59Jr }W*a_|F-ovtD+%/~tQR#$J!/m 'OeHчF%5*i"p%5ju_bw^/1 J'j5:ieR =6`9:)U~C-͸*;O}w:\\_0j'P\, < J)e&AEͮ_RTew`Q[]MϥjƷ2{^ ?w}-UK ߵɽSpj6; t`xR9Mb#TxRJC>4O _gO~ۋa mjc칝ĽՄov^o?.\[ LUo.z*=;,\W9c'Kx@Ud/E[#[лb:~ɋaߋbb -hTWEM|dx{1o0PMuNF|ŧPԓ'*^c%\d(oEhI3~Ο^NCm͂e뙜(kY7&S܆TzEB)<%@%TyѢ=]F[u'5Dᄟv=7c;4[B;EZQJ-{aK^*EZ#)ucu_ipmSU9+L_(Te@GZF,R <";U7ZBRА!J9H&.-Ν:c錝*Hߙ0 FDQèHr"kTL 4SuLWPo1O B%y%:YQR,0%Bq t!:o (`tHxUD7:1D6܌-bF`gcF 3f63l^Q3T+ 3:#Rk?ڣ: ?Iv;fwu1__2y=d93Dwr]&`-(Yw(}ZFG-qZ&(w'ut}i*%ku%rk0٬b+g}7pBh]Ha&Iev<`UysQO|(W(h0Wkiʕ/~s /%U甝SrA Bꝟ[%{`̰:XB [C-d.C 0h GWas(Pݒ&Fk+5At5Y.לk'$hte&q.@N'mLT%3⩣|bii O]qwȡ;Vm}^NQ2oqr-*S|ߗG׫oO|Xk'S{%OM(#uϵ8 4tx/k':0Ж:oJ!.q1&p0`͠i#ai:Fk35ى-D/DFIFxB*[~4;G 4*5ٯ$ GE@hߙvf^ӝ"+dacW[{Cw]Pe=fݛz#/Vǃ )$= `~B4fKY]Nk[O5ܙdٸQ={6pg~nhp#oyhkl:*6M7 ,FoFD*kׄ.^pGĽk.C'kٱC;Ք\/fumw6PsJ6٨TQҜ=hRJvt~?ے xl22f=}FRԣw,s9a͙Z86MkiYs5+m#7 #h xȧTz#˄$MzLJcm :u͜=0El€^RF؍_rw=^-Cj7h+Gn2a L$-niAn&h d7~|tm:dܓ`oQR!hy *U_yA˻guG{a»ZM`^Q3ɼӭfMP/2}l6kjHN+CۛSk<ٸps  ,E@#B4tA2UVyw䆟,|n$|WJB'cAt%jƅk/I$00nʙCn61Ws\\=skOuwGMWC@ʳ[2th~+R)~O_sSY/݊Olc\wQ{cXrTJVNpL571muCQoiKf6[1YLhY|2y)E΍r**9V)URr]$E;b}^ZQ(?x(Zڝ 8G8 ba-<*K9jdڐ,C^ 68n)/t{+3!J dYiGcI*K/N.CRI(a=cIulvF΁s(WB>Bɘ,ћn#]^^N=<Ec _.>1HAi0ZyQ3m".HVFb,<(Fw* :JhZL9Y(e1cdγԞt'$M!X@:#RpU a铰%*QƢ`FHn,𒟄B1aBg_; F Y15Q$)C@R!a΁٦LBԬT:jUMYO%Ej/XxDm*_六+4j{Fa:[?o=}KFPewQz]eeEIBX0yl|&{ &AK  PT9GC@T9SLsF*hQ| t|.zM&q@Gf)Zq;/>?l{*Ͷ _w7bl̢Ch^oti+bßnX$e7|_vGKfĉ`LEQOFΆ48o L1.8b'֐ƒx&O /Op'^W1  Ffw˹krGzoΰpqs~?| _ʏ$efrzztqyuUŝdă)ﵭhDSZ?5F)={ݣ ?kߛ+?,._4SNVwkb.i/w+ݻpO7];b˝XL4WLJm:euMfyOBp:OE_{Y?!fm{VAR&+/|?{q4ˣm>5uMpePCLb:tzh3[s";_FC|fՐe:YJ;i;O۩ͪԯ?k}u7Aט W)gME6ÅL#^&'ICs#'7B?R d ȆZzщl\`R%IL! 
[binary data: gzip-compressed contents of kubelet.log.gz from the zuul-output archive — not human-readable text]
diQU -r u2*^LNQC-vOC<^1(Od(@'1W1C9ÉcjZ\Q [X&hFs Q`5U'*mR*Ѿo-b>vAȴEq_0 LTRo2u뜀EK- pCP 1hbu&KRlTm1tNC/C}R0Q.wOA <íF)( TQS+%6Z/=;[@U8Ӹ2]8RT|\vTOAFjwh%w_ t:\6ZSv 9+SC؀aF{mBp!ɇ-K&E4TQ*n͖Y ߻9ewbdDz F_Bz݈F#,yvӯG/tV?{LKzCҺw (G LߍiGgMi^ Cs)m!Ԁў]fc_z-hhH?T4Geq{]9yfC/D!GWzW̬v5sԇ`8d7 [k,q Dvz@E~Bhw|)= %J c+W旓&E#hf7Jn3a{g Ðvڣtss_W;nVRG}Ms};k4zO9[N导V_„^q=0}Si??|77*hS4){QG5 Tj$%jpT-uQ5F hE#%Xᾌ9WVHZ;TDV45ji[QN{qkU^yxiJ[ZfO씛m_Tg#0^pS׉ap4LiZA"}ДGG|Yee*U) D.q\1GBJ6 GLU  ~0$q5*^o_8>t"d< B29䛎T&;z :&XhD_Rbz#8¾}[<=_>+Lc-~"BxW>Moծ)fr6=HRF! !Dh}4=ӝ"aRzc6M V3(Ъ(5aDU7kW^qˣ?eŊC81#A2 ~=O_ GQ ۣ  #c5qE:͞[g ҉s7-IE?> 1ykeä$L5$ (v+[Ȼ\U i_//{bhǻݴOw@Y|ה?ys0Ʌod -)-brxU@YEO9QfL4Q}.u|z{ H,!|d0 ng3^(t>Gʶ{N@It(cu}Yb& [VFg &GvIy7A(Kwof[?KL Om\!iI5DOqSp3x&ޕ57n$ˮc캫Rzrn;|̼xVQgjI&) "A l$PǗwUVs8cLLegQzc54j~LttR{>&#1TM"ߣx'M% ygKYù /\qLd:I/h *}& h$zN.$y)8Lͪߖ(jZ'g־ FJ"B oO>ykDz9!jXOɊъ ESf&hOi 7H.Ԣ~S6a3:H_G:5ϊ)Op`Xs w;33=tgUT>y#r뿲RDW*DMl;b`<71Nm#5ӖRKrZY=jZ_ZsڣFռA)E!)4t/r@B%}ܧ\JAia6 5btڏk?I8[<۝IV._BHM~"Sf*hzw=Oy5P t1Fc Dph>r עmɫhw߾Qa*kz?ξy{v~f!RX!P +$~t>Mk'(p?,vvns;$IaKc4QJA(^P{YYėXcA}gSaEտ/Fi2͗qҢ3"S}_Mn&.[zt:Sx|{oNEFڹz߼kڔ0A^ c= [ujV0>$Who쇥 bWvrpQӼ]_V>dT/oryǏGI:neG.s8rÑj8lO'BQ)E.EWFie52iMF;A#M&R$?l~(?'M& #OG];:F(# 1 X!>GHJNWI w"ώr9 W1P]xEh9Yʫ,PSd$P H!؝Jatt犏8u-e^E6ByV`g cҪ"iZU6h #t)A>p $*VG_bcȭK[*E-DMơ!O12)Sx GKN"Z TIw^kI iŜ&9]9SH+q>{By6qeх!hM %%%3+ΊV P׎8jdJN"MG,dK 1G)9cIXđI= *Uݰ]S {/F/Φ('8,oAgg]0I8I;& Vm'1@ĘS 4~TL)e;FRnP2Dpwj-5C5arb*jz[[u)~?%2̠?am9|R]mN0Ceh Wc1hf ۑ`[צAyt=$@_MURK Ԓ2#%Q΃SZA)L55 YY MuYE6(B ~:KƕY[umj.@'YN(AZg9(N9՞eDܰJ2&,\c9Mz,'0xע %ְP2hk{Mn%vg9}[ÍzSd&IP" -O~fhɚ}y'DR,{"5z)\PZpGi- E[{M. 
*ܧQ<龜1i8 ?mP28.uT\j 0ZrYέkl>z˚mVqŎ<&LR S{ 7!rf>qmp@ ":fO5k;d0hࡿ6G8J Ȕ )A*|*6pTS :-IɱR4Q1]o.VcDǗ`IE1 [g+o5mr}f8W`&zX"z\˸nR@F}s_Izm7<0jfh:ݷVXix/Xc,=l9gM̌l$f1=O|,?&f^=`Gv3q&XLl> S)/iOe=C+SPx*tb1JZ~>㑑\陸/yVQA4z1O+DZKv|?(vwL/J>k604Ju -q* æ 0ǽ|m a]qCyC-j^ 8_*y)5GJo%w_3PR-7]P Wl1xӼH-V;J-f1bj83xtּf#7qȐm7Gv|r___w.K5f:9dܥM[|6)~Ly|u;U.i2>BIMrBD )2Q-8maîZ*M2 ׫e|Ho!z0Ƚ]$P".޼{#:8`5r0l/6| __!qPmmbم` :0պiWX?Wbȡ))HʕNÕ_"#=S Cl#7wznK]<7]Z7y#Y;I e(Z 1kM;}.l'rhl<ݠ٤_ M!F+!s -1Cc]!έz%ۛm(\yQ-jCǯ~36(8vJd]|5( ywP8ONT.3 !2t2!FFJ=u!:%sD൒2A`C-yeQ:'~?~;oQ B4ے)F;NoeXd`׊Y].Tehs!{V)G9hk7ĽYv&cHYnB5PH5C6_y[#h d rNBDluC~r4Lp: AQut. a^FX>]TT ij~2; ˞V{o?,Շs:Y;b3c{&x>P cg]+y iWzLdL oN'͕a/ kBɢ &MT6=Am>`vϭ_)&hw] vP՝έXy$lpܒW?hL|Z*÷Ĺ gKcuFW>dp{rMn ἡ\)K5џظ/Wr_Yܗk}1a֎z9iOCx*2r`T|Wydt*;s?w­ ģ9lbV={ըl}dr#$zlɹxzԬFO)(ʈV(ڇ_M4&E2bnϾ%.rJk"eoUAqQ7\^:(ZFǕAA!QjOkܽ6[͏V t)~2#=#ԹI+r-W3vBOFO2H*Fʝ>%`, Nb FM#ZC&vD*Ӱ3ѻZUOeƎV2vwVT#KUKrg48Zf䍖(YO(tq:8iVh;_M--byBpgWh4f&),# Y.oJm6~bL4瓖|>?^҈Tmł{eCWH(YtwDWMRstm.mGڰ@0clfNKލlo0G_ʩ8D]˧SZl> }N;lYVFnuƭ!ޕ$"ewg"6ec!V[ ߬"%/1X1f<&Y}q2Լ9 7Sir35}YS~]S*J/@Xa?֗|%dUʯ$M_ _֒J:J;٩(强~3~s~h6//:}jP4 jqp("gu gBV!zClݶfIt01W/5&?/3F+8¨cک Rpֵ9;YPٷn@BpDt"0BDԖl0Lx(J@P@T^QHJu#+-aJi@ͬ6VH5-X;j)D(> 0*%f@0Bh=1) hcX{:a0TPVE+ep @y,-q"xW (zVAW7zU@2$2${SŊ/V߾pK SޒJ2 eߓ=~~Y¼zĻ>Gnw"Eqơ_~~wygŊo/㧸\ M=_aL$JYf VZ\,7̫i'̇y_.}y?û cdiq^>ſysULhll `֩Bk ¢Z;V'nTԹrMބ$9uP|(EdXij)9} Cy9  c&H'2L0J` eK'=)KT8qxc PYB,if/$Tr e+Dw`pĽo\џA/5{ikgXԛD* 4uv֌O >)Tt)K7 731:IP|8F%Bk?3Q õm0- |6 =3 Q!Iy(et@~)c` 9v\-^} ڙoͻPI_KHS)Ǭ~,oRٴ!( xjj2duJc_LX'#SM1g8ӌ>ZIaHy©ڀ`Ry!A1eT\Sr=EĹħ8g50Nx9ORTI $0x4X?hXS:mw\t*V(# X?e!"-9}Mqs{<~`l h, }Dj죲2|T'D ( N!W"fF"@~OyGFzF9iĜ8FqPq^S`IaN5\h;TWގKm'{ kOĂE%ã)M'7Pd̻Ybw[] &&-nj'HAf*]YWLGM5#E 'ѫ qhDo[cP!m ZRD׀qqCJD]M8R:)6W) *p >mm5JuS/Ft4g00WG  ߂A_hdiO`Fs@H @ Kqj CWٛ_YS/{gO/{[O zeyarny|eyQղΝc(}>DL$2 rƔi *bai3 uon+oepw@㼍ֱ蚀B8ϫ1CCmEL Քa" 1N5e.- k:flن s<2*unqͯUH*pPL?~>2Ŀw!\4']u'֞WM59;evϯi(?iʶP][uƒp̔A|h: G;(Ƞ!0f#52gņ$1琠 6'{lѩ7/}gۏ"vTl%9(})(I%su2V/ˬ <BN;,2jZy@2Fx}7%FvJL,吪,*jݷZ<@mim{>7y{~CwTZgXaQ!젤s,JvFfR;ql0~m[83zUS%cdQT4{s1v? 
~]RԲ9/pv'Qvi:lP 6i^+pLo#R8Ar6",v 5IqwmƠ⛎?GMwR5l05uC_tc3^A٧*J\O뱞Hnt U,SDN#Um1 빯3Dq{E_6 C>E&Ӣǂ)?NVAf{w0-Iiza?tG 󡞥0WuacQL~a{[O5_Aߺazw ƳG 1~U'=+nM#e|c C$vH9XSp0M>žwrkQYLx??vկ)UR{Aqy"/oQCqq[F z}xƾݩ~a]Y=@10 G??qF 7(eei. g" =/5ޢpCQܣxf>El>yBo``CnU/y~˫\LSl~6v冓x٣hb^Oc&auŲWw#*Gpf (8V2qZ'}ky$yyʺMѓGQ ųD~H50# /ظ:E6qL>s|Pz}ai~#$Y9.,*6z-ͰS HML}JfO rx񟀿9b-bwm^0.-M/7ƾ&jǒEJLɒEqI_hg> &7 /_k޿nz>蛫/Rrsm&jPޏ m{-͉V[)ۉ`nuԭ* >AfYY9s %eQn`rtbzWR^AU]7!}PfI؝M.;o_$}{'M̟_&^ޟ5U6[Ͱ+{4 7RIM}ԇy*1a0Y>~6=RJݹsP`F|\QrӀl`; ߥٽ=q`\ԚԯpR,!\{4a{INZUПq84ok6 >hڮ?h T= q*]5 <3=DF)oV42 zr;(m IimEV ߱w0 ؽ-l`(DxmkaV!ӷ*u;^Y, #r(vbPRZߒC3^S2L  looFx 03OaBV77foOA *Luxvm)N^*ff^93S[ˬTHuΰ56=1;mUF#Jz? rZhvx_~poWz29in`ON H:/`̞v `tŕR(Jn* 3:t3Ŧ V pr K0SNK6]|@'Q*`&5X:Ese0 砜?i`Q/y8}d=C#"J3QC<#zg)8`(AN蕴H j0\ӽmgjnۯ_`gZ"˒J ?겅/c 6,dH1Qr%S~-:Lf{Y ⩍9,YF\hD_ࢋ82<B9z6,fbF98FMPDkPpk0G.FH D1IA8O"`o&Q%XWfr1_^O.]@}fL)bflゟdg'$3ZJ^ .JwggGڅ#xf<9(_c5XgIAt> e:=ug`GMG*(aAȨRXAh+N"V$&.DYd_P"XdlnjJ^L#K:ZʓFx->~5qlpmwg?:4s]O&O':UO+Z_.,1Pճ;ߪ[7Uq^7,X)7hϓ.Ç۪Bz",#QHA{0S >bɘdő&LAT6J D ]连?0T.;!W\:VFi¯`';gp>%mg|]_NGI҂weY *xW3BtiH)W֩OSMP56 |lG)uq=m.x:eƯ4E 1{?i]ܞӭ=[@ ƛL22D*j֙ WKJ_;~$0 >--uYٯB*$[MLPk)-J¨))5l -U&JGp]o9qNܳu RwZEh%%| A:A6X%NiL5ulѬu]黳]J`d&x _"_zn|s4,B=*.}N~uZu"1A+qռK. o}6H]h@ E4I(|cx'wkTe3RZwAB&T!ZGM ɺ`R1(#:cbڬ?/홼в֭ y"$SUY,(F}s_\&P"A̚ jje/ jupR""⯾NMCk%P/C(ӇW(jUDՈp0OZ5A Y+VMP3죊4'kM%e #!Phaߜ2!N4*h-(ltB%IF)0v(c(2,D*`lh(|nR[Qe765UqNƸ[>_?oZJ7uMqZ{}}lQQD^N1>q}0+qLPL2/Y.".3齵RܺpH=VgР,& WRZw: X2Tvsu9}Ug=fϽE7:!FjdȰ>c,Ҁ%s n![?=}eFB_sVNR9Ft$s8xW#_fz^}a44?}8 ϧwAM.k(sz4U EiIF2 !"Y5"0JF+D`4$zahppi OcrP WiI |O9>NI TObŒ͸g&vBwؖzX8۲.JCt[M穁jF)`qgQ9 jZVZܦFp FҮHJ߽*‹2pi)lRo >K@ҥֵ8A#z>,RYkTp,#0bi V@FAxQFZSݺ+`>\>z|tM${C|Db(:ҖYZ(s* , [h QȻő 0'!c@a4V(JU6SѮzϜV#Y傩}yUi~Qڜ͆gYog+YJ}cD{ @Q^#q|m]e͠x+֚ aIHXx8P7T!k1H{*a@U$8Ab쪷D |3%.6Y閇¾eAR /P[?j<nUWޟ$L:Nn8Q:b?8 's 3ݣD ME!'y!,7 mj'(/iE4sֱ( \K2(?$ߎ:2gw2}/E!`L 8V#%# kw1f2`|}~un5GfZ14}f$ZA a𴙾?=:5'C{r7I'܍}Ty ϛ*$%΀ `aYSdĸXq@.RGrE#A'@(8|9{lT6F@/< bR Sid1ȩ<BH!S#%)P g:%/ av$WS$QO ;AȌTkz g. 
#0&\l!_"DžQ+ey~uP"W43j]p(D-Þg"k|3֤]  EXچP4PD ͢rXo)|kݾ HG%dhyB{0tĭ;̉;և07b)(1!W L\28UhTR5'pKqy0x;v^9z2v^yI W+S.3|,(ro1hq.`J 6!Rlp >vdAR奴<(z)#aA>wS>Ѥo](4eBI^2>rb?O7~rbFO :(2Sga0oDߟ_\.wzur"Fş ş{N/sd6ms[8dk'63YgCpq˘uXhø}cnw;&`fpdj*r(+Te vV71 rAP]J{I80)3[cpD=CNc'ɰkz'?Y#vCfcR|(]Y˔lҚlQ@{kIܓ\L/_†O~O~>l%ڃ{(CC.R DTYm +*8⢦C-%}]/wyϥX=R]q\"0].հlTZu2He咱5r@)%I~3*1kJFjY Tf&D>`VfAAO\ʆf:QEk֙R%BD s٪LfE ֞9oZCj~>kxOx Z􏷵6ӳZ'P>x u8i\UIf˴Z;i@QQۏW_: &ݱg(Z]FO8ז:N?g!l>;DܽY#"j8˙hm`ѰLuނ󮸐E'lQz7/u_hϥjqq7.Jq_h:0y}>T+h 9QPDCj{5Ő,l9&b)IQTa1P[['ˁ%oy>IZbK/}rw q;xs~KJi1dAt9Mwz['v*jp2s ʹv sc,#嬳 _v*Ѝi 3ګq_Yw}b/rP;l1Vڣ4M>or܍qYmHg&,-i55f'8W"xRa*oJaJaZ ?f %V@UF'teӎ/ 3Bp$ A9qFw iOuur[>-dz5na;&\^׮V\.9 ?}{+ ޵/N0XtGY80|`j{ѧ^EϠݘiZ{3ԽSLAݑئ*K P]u%*ZqH0HDU&Pwb^NthZg }#}vyH!(>IA"3fOk__"}@V@O$["%["|5u5<"5FlZ>sIBtI?Sjd=<AЇwGǟnɨJBj!!M}޼[}޼WnA)K )9L4OHUM,ƺ;>h>ސy7֢k˙y˙j^*#Kb1m;z`20p] D.B1QΝ=A=wwQuJPnķ}xγx[؀M% &8 6r $ Kl}"Fa .XEx٩/mxiۑ{emХPˎX*\MK|`j*Y{&h5p%;4Vp3 afSq,t bGL%*fm8}1zd##&m ddk)~G5Jx=mb^Au3rxcuq~.ˇ<,=|BŜZ^Y,e"׎@ooZLےr.xCoH/X ߾۟~|}D . Z}:&_{H\l 5J׫q^Y(։>=9}!;C"2IɊp~F2ֺJվG >܍1h&딸ڹ8m16cf.-"7afm"n*ͲQM[&.gzM".tP萬 iA!I(+rs(\vː ȳ4^@h8~J?S"mݨZ&xǰ3F  7p$\IhZWNA^k v, o)Ze:.UG\Y;bylŹ UК%JAؖIGWAE**IHBo3`l"Y'b8l ѸA5z$E.i2=>W{R vZ^ 2]#v_xH{;_gw@79x-lH]}f]IӼ.4էΦg1yG&86NcSOCp֨Ƕ#w1 f%_Lhw8YYW4֟ Ay>R6?`V"32W?VBpTMuO8nCzRrWD~+'גfVڶىˆֲڅ_ 믖' ӐZT:|Ç*4eeg27| Z(LuPnLS%T 8nqXLO߶tyjI\Gtt%UH,!T΢v6MSwzkѢ6%SZ(eD4zc[Кس16혳xwbٻF#W, lygFp_+U)JV$u/1KY%h$*#ȸ22",Oe.`md#s_!s .udN4.'d0x㙒RxY n-D3>C0BIDP(BVaeSmJ}0.ח.Np%6أƚP)cX\̬"Jv.MA ՝ĝO/tQ)O~ywa_lLjq]o 'Lש?t/? 
7[MUи*@I(aV_CNw2Y=Ix}vZGS_eM]`/=߂u製jxW91HKUݡPllHʵmVa G9WJavܴmbv"D )OXR{PjlCaQ`Ca3+@:c{@כ2bo!yH)\R_N<289%Lɍ?%NpD ,A3/ ,)q%pb2YٔTҲ:I#?JyJBz;7@xgh eGYA{3H򄃿$C(s"Td}DLrr"iwzJ"+H b;zR,b\ʉ]Kc<^yrYW.ӧIӧ,36 qL8]P,N^CIH) J0sQ 2`pzLr#b48-I޸FO&՛O#V7}>~0gxE}YxW[~2oqikb8B0!qb5b62xeJ{Vǹ/|u`Q"i%fϱf)nن5bq(|j !jy-Up \APS`mIA2[z,hGnuC,b9}?s_o1;#JT/?>`%`QN]rl65eAԔqT{)3; O%d$ލU%luduaia,pؠ0+A p@/0vf3Veɟf?wt|2[p}_'zz>v ձݟxG4(T#vQUpZ |hDޚ8*-B[ %|^\I>'WUlp9-5Ig ~QS=z+*z銀d>b̙\bX dOYF'';&m-ܪA"cY\CeZad:,;5Pqn !S\KL6,G>I(P$xcɆdӝC{M%RG5uVkQK cx4۩Q9<9|D'L%''ˍ&jCC:N6ړLz?բ04jM]qT;SA%HCͦWAݝvt?R@@@l:ۼ=ܺ\7r({W煞/nO֮,*?qL":I9N7gr5v#% ߾|A2Η{`ij溷*% 0f}iU]yX3;%_^/zW;_i)ڡɗJ+$ T/YuRX* (<$!7,D듓j{(XW|:>KӬ{ %4PE_&cxZVMa赍>,΁XlB B]o3rX:Qdw3S 1%AW0iT҅Ҕ^D;NRZ-%Nx^>i+._ 5Z0˴~Z͠%(kwpCAf+ Qt-%QA ) \ aE[mț4ˤTb=6RS䭑RlcT?q(qԔk9G! w`m _q";G'(4xO'CrR6"p?2 f C|7+.4)ZYPx|R=TY?y(-0Fԓ?]\.>O/1&r?!Dj|tq7ӓ>Y,o].<6^*W&ˊ>wmI^lu??98ɲyIq. fZ*QTaكO_vG hX(qNN)n~zR5V=,IA3UzXV:!Jŷܭ{8(HgR5@ao>I U IQb.fb9+jɼێ'%jֿ{Z<fJó9zfX*r)o.@Q ft{5%s*;XPU2OYp g)_^IssW#wg bӫh|?w 5uO]47Qw\\]}k|ˮ?t:xM (|p:TR@W\e[OG*ږ<8lNq0tvk32{OIqs@.Ib;C`> O,P+Z _@>Y(gQϪ2_Yr*vX1KSkSZ"6#B%K*DX*;>2EY s_!8Qc* FrLWzrM1qUB(mIN9$`sq\2!-Ut_KJvxU"Ո ̚I}o==4=iHzH2ﶰ%e(#Cq19x2,NFWŵ^\2!dg%L90@b-t +RW;D-Xi6{qlQHYgb\tTeSAJ[f̻%xN?3ϯ}?/ɺf2,~.΃;mr}3wFD0Dй_vKTtnW"zv=^-x:1{d|lI>קdlq)$c*=$ -|is$ n%vB Žv|>2z\留 }(m=+n~0I'b]e1p\Cn ?`{(/z*zF\W/zޡ{3  1!!`&DP ji>k/qБȈjd(9HhׅZB5>bЁyTC {Ex0gE#xC8,x\!-=~bsxmn&"04E/ϷNj3ABvv@@S:H=YK] w~X4VPGmlLdHkl!aIz<tCG] Ơ`ffRC {ޱKygk^ mN]$1'̪5_qPN區HpϛF&$l-Cԍ!- 9p mgʟ$ 4xБC Z{^~w(M%ZRhyʑzv~bf)kn~z*180ec|Izk7@eVZ"bv4 #"@DkIFm;ލ%.uIuo2_Og⏼1H%SgfP"IpPbN-s#y]HoRRk,J0 . N]!+VL o AR.N 1$ZmxN,x !gbWqBgK3dguv@SC߾^ ŻvOB֒AYWB k,Sw[Zr fӮ$Ɍ ѹXҍz$}%8/EP,z?XA Qix1̏d8މ=E9a( \a-7,o3p!Dgz9_)ev0[RyŸ=;6B-Ւ@Rj7 $JRTm%def|qgdwAlt7}F/hŇd1Jeas֌jk.krQDy &i d.BLsdLj|AiD;5֚kuiU6e1v1cv &vgqcwihhcrW*Of2 kX(J{ C*>ϟus!T&Ff-kwR<T%l B1;jn1CEzlܮl[@h1N>6(VmՄ k w Ɇzў}n7B1^3zm! 
!g$Ss/IGqĝ^L@Z4ƾJ"jK $Gy5b\U0>:0z]^JhܲQϭFu[k4MaXm$˚NrAD0AEm\ηXb(ۼUA(q,\^fc=Jeɵeck1E cbc,h  ;mc'cfW#ES\;Rco]WI3{=pZ͍KS?Gntwt@m~s=O!\0q{wͪ91r8X-cUr5zݡ }1_),NmXx:kanl R1Nfė#"`3|whHSX:Ȭ`"VZ3O=>?Ɇ /*%4kiZ.4Ua֚n׺{Z;KkV$ϼgA\O[!% 2:b?ߟ>*>S1-seu+#g7U$W}tȪrP-ڟLB}YBL5^ߴ>*mJ[VhtrV i=cKH8 \>fQJoi7*ܮfBގ!Vx 1j<5Azˌғ hcƝ5%-;lbiͺՍԦvi#w4*18n?@"!U~?_W?~UpON&/ټ~y)SX~EXZa7+2|*$nzp2I nv~m,jDd5(]0e&K"&aR!1əw'7(!sg[ۛ o9yf^Z"h(k)eIhy!LƘ4Rk[ =dݭ”yfM O;dU6KO꥚>aLa6D2+4lj3ic}уՁ\ü_{^iƮ [JR͜ K -ϻ[GSG0z0Nhks٦Nw܉6V .FfT-5jKs-Qr22aVХxJ+ѵ1] 8krFC'^!l4R藴sm/tYTWѤN!mWeDrS6wP磽̗-Z51v(ͷ,i#d(VeڣPQD.ǘ_3KC;5MfMy 0N6E(6gCO m$PZI#:'jȎH᤬$d\_hWAgjY4RZ9,7rUC3V}yT7G(ܒo),Gc5 bYy0l̊e z] Ů'@p{čyBq .F }}Sޙ]n2+& cKx ,Bnũ7QQRh9TG X.%;&)>5sb 3N,k ˬD>ɸ4}VVm9hXX>:PfzBAa)1[aߖ7]\4L-.TC?GGkgXrfz5aÛ-o=iuŶ5E}d5p3^BbM4ˮoҔErDeHʀhd3c#EQ0mD9_:/9d6N|Idh |(5CQsPHjup=qhDNe`%z'{W.k9xQOAB!s$ɻ,v {oyیEϟ2ܩx/sҕVߺɕ'?g7, qȪ߸g'ݾ}k&?NMo&GIQiR|ܤ a:9#5Gr$0dcR$S"{*ȮÂtJC)a}a@qbVi_o<cV4 ysVBn$Y 7&V6;*fGT-W-r I}09ιF& )ȎE2&)LّґuNu{ Y9Cpfheڠ"vJ ~Tb-,#p/d伤DSad*N)h A{mAuNu2龃e|mPJKxim(hR~(x_!u䵚ݸ ?]6Ӣ :XhknQ$JL\V0 TBL%s{mry@kD@W{绽irdKv4&w/Sl1mA{eY!- < oB$ҵB eȃ)J)GM\Nb$>($fh$aJcD" vD3C_ 57Os9Zԍ\N×ԅ߮ hyY^=t/jG!4yO;[BKyqpӒܫήI?Տ3ō8/@$/7~eW=Bt7/NoUL')u #VZ)Zpx>G%O)2_FJ|fwi}h ^A2Bڮz~ϒG+翧H!5 i@ZLu -c"{ t"T+.iH *.dmPHaCg4\xCUϤᴵ"Jì58>멭bDNI/颫?iŲ*[)z֊r~h"i"kLκ-̀Y|9ċ[ʮz&"KH^0lH',׼gFYi!Oș 9h*aĴ&k9W}[ڮx(ZYف338ɷ6`-` k{WzƠ6܊@p\hVL(Wa8qmt$Rx1H#x3$0mtv_jY-ʣ )XZ܃O.S~ .}:;?kOU8-|{:Xt6=\'J$ \LSuv1?GW\}FCrNqr'_8q'i/|2Zqc܍Y|AU?χ7XR">"y4I}m!@P ?ZM6_lO`ivZ/&bK 5ǥuT'<DBՎ+P/a~Pgzd`ɳk 3YLe 4 wtFKgp@ɼBufe"~:zB7f=58߆` "5${譝!}w9\]YU@:ORy*$ jK UiheT4HK}&)A|rA~kSVo3162J20cj8wyJ7̨A`^)v It=@AQ@+XC{eM LK4'Ɨ8;kCdI3t6 ]!..^-^Gv:ߜ 8Y]ɗޛa61E&=RJ^x\@pB,D#H x$;$b(AUy ܏ 2DF.;:^_u*`ImU$Tc-vMq3 O{Wk} 78ʲݏZL$W)(YfPw h]X٢Uϗ=۬/EG޻7z]C θg+J_N\ إfsOD[bm K`_^EwEqU [ @>E9i 糝[0V/g|?7`i6k<`ʮMEr >O&(B5$mAH+l!jD*R)P)=7%17_|=F̆w#)Y{V2]8W%Qz(eр)92 [u fӡOctӛK&:6eh:hY^3 F0gџ5qnf ƽ4+ /ĵtK)u6Y=P7ΖʚoEG{7liPM(cƾ{?<,-s ~' Bz>G"Hֽ s:p)_{YS Ci E&pdBi'1Vx9mI$ Jm>L1X7'۝Sl騾ۻ5c5$9P[ w\L 
?Va9rr?#7XU0ٌJ6i\]~:/~O')H:˪#sc,,I7my,x&ߓڽ*<: >hjC #Vwnx+Tj(+mO|eY|cL'Z77ս\Z=:Vo>bg^72 ^oV7J.)`вz-A-jn7M${u|ѿ*Pe4Z8hkM+CґX0P+U?x7@:fjtDVxBP AG7NW~-4 (8HgKyMr=sUqMo, 22 JR(ˊK.щk‡{n#bIJ-pb5.nJEt=jUp)ESB&u'y%$&3ɮ@&BwY)9FϸRD FӬJ#+EK />劳N+CF@?ۃ6+ MR~B43(1?v =C_U{mJբ=] o93En:(K-Վ~{L_k/lAŻ+L^p0Kn|=D1H dRpFx'?/zq|.3J`C-%UV4D j^^AinC􀗸 D rf:i+ey#cD ~<^" к-K.S4F%\tx,Pgmx;~%0<+hK:'B_烓<`]+Rl[lF+zohUz>8<kԁ!;Hcw|ZBS:пy;;!F:ڳz1)MF%37$ALu HؕZ#z~B↻mc;I$qώRM`QhG(&\z ݘ29X_獗<9/<ѽ}"[wLTln #>w)pW^஼]y* b;NJ:+:*6 BHw\.&fRg%bd@\9S"@ìw aqGU_t},bwSeoa<{g4v0'/D=8-?.rX ,ٛ{?\.NUϋ "NQ8pUnñv597QcZ/nn-má2$>~7d@ W(yw'3הͲ!N]Znn"Hr?}~ۃ pw7^WK4͛HJڋFf6p<]?:`!TXB!(c݋& p?ލ0֮ᴿos9\V`_i\bkLBY PTZLuۑ)T%)4N61Q iMScclXkQg͗=5l&9 3`W0^Rs7|ƌHUEB ^*I1}FkKT_H㴖R'K"*;]n2sw^'wYz.c}^ UuJh*y笝"ἥij*>*5x*z]Nsp`H9x1BNjTſpLũ<@R|0&9urւpFcJC0IS( 6!T2+M9<%qmY$g-" ۧhRf*phB+&B/'DDH8L%+Xb0)M)w$v"~6$ U(l1i?8mek3[d:F}m2ҢWgOf!p>tp{ XEғ2sjS/4vcUB>hk.=)'6zbΞwu+v^̷nr{LR2pr"@4zgyG}gRgU ;wTy*^POA  3xw@14[gs,Di#wxū/qញ^EϿ&?|f~ PwLF+ٗg#HS_nx+Tj(挊ݴ1]㧮U n >C/k_LL[.VVQģE \O`l#pSyvO0gaN4z8!BY8J҂rNRaDYTE@8X6Y-] $dwu4GZBCe)%-)Q+<2ª-gz_rvm=3lv_0lXr$ynA)$%,ݒ'BfX,UEI7ΐ&xXu`ftC`eOרRUm*Ef|cTJ 檣s7\|TDhf5"^u)ȅao?߹G4'2Erwm.UQR-[w>.Fwoف1݂V K\}pމ/z\b)8G-J^H<~)O9 G~fJۉ "A0\[ADNJ5d~U t'-]<^\Ji0tgCfpH V̼s e әAMq1yE`CpNBWށAHI]#{`}ǙCTRsPP< h.Naa $Zb~2 $+5jzګuYM $f9BHɄԄ[Ya5JC1{&s$i.E%ľ`0.K p^ ^!@nn24վ)gd Ji{?/˝~|ƙK|^\PI?G:f.t*9E|T KWS%% 7|qـKߎ "e&9& K u3D3eR"B"`ǰBZ3-ĄXphTuR]yt肷5}t=ш̓Q{QB>ęx91@T0GGK6ar@T R䈖jSQ`>IV;Ҋ0~}Uiezz :~q 7/{5OW52S~4Oy~dr1Fa/zF1 @b-GW4aUx7DYbJU_d" rZ拓)Bi{L ̸tgO,#æedpυv/^P|w-4CÍvp|YR䯪k<ȹɹ^s=p$>XBwcʱX7[@E_:M `엺DnRq󙅖f1컧f-RR0 .@ä.uR-ũjՆ ,3TELrG 6Qqj^vqR/{7_dnfv0/i5_y?^ W|\ x6q#g c!")izObbړ?FwS4/Q:$֍r=hb1QwԱnGS;n[Wǹe*U=u DubQǺq ٙu1hݚАnT`9IhiTm<|M:ß`Eow~p[sc?0aIuȾkmUF0V[^/thVD+wcgDpߛLoq|@@{Ћǣt|?Hwo |gx FqR[7,OB,acUmW2o$d-V)ew܅ J>:%R8H)@I6-T{ش\RdHi J,tqy6)% Ep2`K,"D}rd!QB6e"ɧW-L[RܹGD7] 6m\c>["Fh)x%#6EJ pɴB-ȕR`uu]hKiܭW|%3oLf/膜I &9Lˌ'RL,UyʓrÛuְ*x4T۠92Bݑ- C!+ǀ[|7 x%G?<;xiaUx|p_`(23+a4WB%>gdJ8˰Q&<1!o "y[h0:0f9|J&IdNe2&#JPiEiKą.֞8ΎG_߇4vqҟ51T)"}L_&xbՍ~ 
;^NDo$mElĺoOBX)zK˧Oj}j޶t;d.VL$SIR1-o>5tpf0᥾#E>Vެ>* @Q]4EfHe"WԃOG~MoGso]|⸷U_S"f[RK©@Uuo^ /zg+g; Un^Ck y4ڝP^eӘ}cKsّσZ{0;ZJ2;cCJ ڽԌXriڇ1`d:WXp?B|~aIڍ3fwj^SvWx>l3!=}Dk"3+{.=OFM$ͅ_j&0j{u/Q+ 50tՑy$ huYpM`fK̥ZX(LDTbp( yK%eu<8B-k2&քOA~h9`:5lFt>>SO$} _>A(l֩f/Bsy#k|Ўw|h¯3 GU/x1}u cڦBMmNq VT[;g{^A!獋 oy@w)I[ -l<0&Ἲ?5ςbM'nZD0M["lAXC:λLl5'.>ŊV}!ddž3K~Aҝly.t*9>#6%Ǹ|yӭE ס'37={\: +gq۝`6w񟰦B_ұE\<^kxC!t4IdF-Ź#gyq=!v<<aY2BJk5e,pz yȣ/\].b^gym:ʢ_~^Kʴ>fp Q9ûD-~ooRE;,!ɓ}uqrk77Y fdHB`sB}FgܤT X4MS0h*FF Ɂ\O[/$/;>HPCs&# )2 >Id3,80׈VQrL4fƩ &`ldrEz*9ͭHaRBP,S}:!Q!@TrL26jQvWY#?.MN"53a0YO4)sg&V8)y*y$ "/!4H*yqLA 5#4F".3dZZ 8|D.׾rMZ\ĹKztoUL H~ͪK%á0\^}/=0+nꊯ5u =JiL8m(’4a"2#J,7A!B!`XPMp93e3Q"A4 L:).;'2s$eH` IXUW X!ؕY7RG~7s#XK$S)CV j-0+-9e|rR QΦymGZ:U3 p\3kL$&Q1,#vHϖ Za$Ka ~Ḿ aE?âQ'N'WH!H 2n~Gk4j993C5ۭVCHLyF ڪ6HSuy;PS{ DGSQs޺iF&V= d-SHRyTa*y+e-! Me}[![&)8(X̲ۘ HJ(ݩn 6)Knx9H\QP)a[&`x]x`m"^QⲩK"C$_'no}GE6LF`_ƫu(< xޞrڌ[= ꧫ j6~muq(q[dU.,0B7)>(D{/ビ̸]rkeʞGg-ߴ"_(Sȳb_*0ua~{o9@9F E/qon{O\x?_1u̫{$OI2d?{=/pA[\Tc]WlȞh__5V'splc4JtFdӏLЪ~eKa߹X22M܀) nIh3aVHʯʝ^n#oK  j3ރ%1-PcaKR0"Xš#p|PfKtL@Y\ 1X4( Facx(A, 2Oan^E]|iyRBU֕ Fs:—><.Ïf:ޠޅu+^htP&U_$R\F qʸ_nH !t&;ޱi)^r_#R~>V[ʚF 2+O\W'kruIQhMKZ@ȇ|-uL +q`P=~ ְ˙ɧꜿ&a|zˍǖT-WU=jl{[DB,@E3?<G'F޶$I8D qbDn;y~sE8}/-7'YM[3~hX:C[h81 ܐNWzsWzlzn@u>&s\.hH$!Hd Yш v#[!R`lb~'OwfJ 5"ўQ[Z<8B4Uv},*.*0ZAαjq>|[K9~AI+q8 _>C )A^N8X)r7$W3rx=KXk~UuTZo:T4|!4{a4gW v寵R$`3pe%.=mqo+ٓM{#qgŌz鬴TR i\ԔeAE/k4[7I4VVR8zCgL^]M\Y-iv0zw;k,kw{?`]Eږ){cwm"H/X J _,PjPz:u)*UQ0fw;w"}(Njlо]d>T떓GE#8*J&@9xi "A!ǃx!bqo4?,8μs4Ҡxc&,mv iՠ, ziq'ͦ\":eSN]>MrnETQ$KK syVkl<_߽CcKNoM`1 7"iiWJYq/W! ! 
K f[xdSX B؏KnvP@X@r 14`B)>3`ˢt8ΚRY0N >u)nc|>֘39,UG tP^hIěwNmrˏ;G2D;cj~3dhuAb3rBGgUf}yWVeFYBQa{>N-}tOZjN-vX|xtQxjټN^zeMu)GvxV|I`8zFض8]ؙ}P5lӏ.u~S';3jP%P#F]&JƥF 5hBj/us4,ܼX\x4#srJ%vJ'ӄ߼~HUݴU%] D55B5̿:7!&ה5罅Lw'a~y+\i 8TGiMUpVgJՋb P} !L0YFeZsv9sV>y]B&xL%E+ꤩPKQu` 7k4?b~:8"#H>#No@C_e*DM]܀j?+'5$t(1' L6Wƣ!TæwOwQݡ?REv #(/ʳcVm]fwh N[F:Ōr&Qb:PjզOo}k>"%pV{/>6pLvPՀ&)y.m4YE܍ǥ쌈]b%-4E.yJ;\Zr,:֗B> `TĮ1*kXj@5 cw `TYf*$boB: * c<#Te W@]af{/!.[h) 齞Rjdѵb:NpT_L4eHL|P6gMOCgj&*^ e)GEf! &"壎E̬j}0$eH)ͅGu-濖r"˝ 9F4b*r<:Sdž G`V*"x],c+#)0L 4ʱ V,\*hjI,3pGkcq`X0]ڄQ=3"c̹WW)tX:,yʑ vS b ^2˼"̤sFIi$'Xq~\9Z h%']zru>-]Sr;[lYwힿ/mixJ}x+2DQ*RW)Ts؞>ax`7w"HJ}U/'\ߏFmM"Qu՟4M(gH tl9|Ƃҵa6sL\n~h,% _ RDY<Xc$hKͼLhʯn.'ݬA} 2m赮5) R3 (k@HI9яk݌*o]e$"NAR2^󛓬 [ㅑzurɷdl?HXJ'A_{p*~9lmu %ܘV~!짏\6bs~D;xH28ULMh4:noRJV [_HlM8Zx9q5eDK6Z%׈pi"J8T%" :cyׅ6L4[:`Ceu{,tFetΉK' \`C.p5 KUo>+j&$9RZelᶻ_VUkp(kajɔG_4][pqYGQܧ%,9IF_^(bkyUIjˀo%kh՞HLGC^@>\Hu$I= oE&\4c& 1`PO""DIMg0ms,RY 29Ul *_,}Ml(#m;/z-pOyWU:0tZ(.4xH'4T\ ~C:͓+=o~!I4h^'Ѽf7O R:Bt<R<3Y!Q6+63Fl|Ӂ~4S(i3.G2U)7\lTD+z]R擗*v=n ֽs˻Qބ\{a r!Q5&ہ+ff2޾ oyw0)p\ *|sg^>?vP^ ?9AG`Mo[Ͷ1uYVa'|ۿ}tO[p>p%Q( W^MPO\n}a'}R=?]Ԉ9ϜN7u?3zSmVÜcZ.6>$ AFL E1cC5,)<"C"T-CfX xGհU`SV^T ɋ} 0yYg|;+$8rMac yd^ IP]lAD+Z%*p{co'qe;uFffwVbG5cbG`4>"IMY{yg:~߀8'h%yf;ZŅr%,gD`q묝 pkw˸dZ= SZ`"H*D2療a$0ԁ  W ;Ft|q疧"\}+, 6FT绤gy."t7 ݱ>"7&񴡻Yl zİg)Lb:\OX3Ws }kAQzgSu{յ0ь3i(!N1_0j}U'pպ%RvÝb2cئ.~&s"%E4%  Xdt1>;%(Y$SdQ bHydLP$5\NCEVaRJTcn0Qf%TJm.4K;"Lpa&1W4H~ J ~ rU5b1&X-E0S?x?֚b߭Hm~+-a(;Q:Rn5Qr!1[oHՀrPF{U^GЩR .BX2VB6Jn{Jw2Si9{4YWǀa d6cH RmLl.3N%\b22٨x*<hLn;P'`Z#C=veG^; ,GRB ]%B(KM^~hlFM/ڌnNAF)u9ö`NM gwwemI G,:.EemxBk<Fm2HJ'7F(x hTg} ?x;ߠ@Xf')=!n}& h;5nSajPքS:ACV4vgyu+W2h:ݧh) glLc:GcAI~R Z`'d'iEprL岻lA[ %lˆ6W]͛4ѝCo1,zLK'qӸ9 ?XQ6̳If_h5]bi5_2 oߔ "&tha &DG0S>ANaGLn6Wt$\Y˳,gY.rt[@# τ&, B9  S"tїEn j݅9eGdUrݍ7VXNDf dDgl`xoL.xED pKQ1dwJx $e' D~NjC";l m Z"u-ui&9Qmr6t/wtׇ !h0Niiz<տNγB_l@`FGn@pCC {CPRa"q@H2BzE\%+~N腕\)zWʬwJzQ:nvt>EGݓR 0L˸:^Om͛)P3e[vaC39 zZ /Aa>*szT&M_)eq~$EsS2pz|Xy$|ωKH~UQ\n\}Wh<9=YPlX[uيlw"۪ކLtO?%v;o ؆X [/bhps|+\΍Sdg\vd)YY?N 
)IRo?Fm6iN4A/̏shc+ko}Q&x;O`Zx^F/[„| />. ǀ&;,MwiWo/xkՐZ2۲yh-| GCG_  Kn*ރ̆=11\h-oA[[J5@c4^/aeQH޷f&[0;Qs#(Ec0_|z;Dr/*^^^ܶ#2g`uA_އxv'S4tǩ2K\rI4[DuxN] CO߂|NTkFs7ˍHi:J$&zJ?73Ӭz/Ӑ*j[yDO3Xj"y$5^cåHS%coTGW-ё\@\Q)P8j?-D_j}WKP+T]9?3ƪ=ː+7M}7*3f'+/k3z눔*py˗[|T ɿf"QqiU SF?OFABo6_ۇK3qzuqANJ‰kÔj$=ZCeFhl!W1EKW3صXZ˖K-}%/ue44$$*߃ 7\+L?ڐs58׀?-Vr 1?&u-AltKОyP ͠1 hn;dߴ엔@C`t[:6$İ 7CO;QJ3)`JKs- @4p-QEd֢@Hڇv* R+aZ]ʚWGo7Frol5EoW#[AC9 -qcj$hI'j ^r,3` MM;hHٛR:?[6dx$֑V ^.S/ "yޮJך5ׅ<jF:ek55~Qu528}(6Q#0LH/0jȺQCeų /HԙtlA㼟ܓ\!01\.ݺ]ҍ )to@( tӣaJZvDo:wI{VڢB/_52FQᫌ!a匑Z˙_岐{ vtq#؈5N8 Q\mT/ZC|n !#.%d!@-Z2Uz\@NLcbCز,<0Ln#eu_=53`!l<¥:(BůK*'EfLEfm*yQTݥnk.|&H؀TJs07ݟ9ֿ^JNO[ޱ؏Kƨme6rJtousqKyeQ̵qZ.{[\7կcFcYuoIh ߣ~wzԑ|"t[1OMR0Z\{}'./gof#E'pKmՅʹOd<T:(PMO79aBeFNXz|wӅ?ػA 88a{W6}*lAZ7Xzhavly.ד>rVQĭ rH)up4:\z/p b}e0L4aa-w3IH]^2&dTg'Eh8ՂQcQœt21VWK3(n=\q4n>BݝjJ"@[:QBd6 |,rւK:)0g Ըi# YIAM sHpse \AV+{mtvy .q4g]:l9z/ Ui~i&?uLB3ƿ/Y>8=k.DDUBĝ D,0d jՊZh F(} y#hH U H3";ˍfg,S*-ڬJ*xf߽6Ss z$[mtُo=ѫ U€c 0< -bKsUqj7h\Oq^2n" &e&e&e&ՙX7-m|Iq' }T.ACbTj|Gz8G "7Վh@&F?_K&SQA/ik{L MC!Vn"Rv$i=G^ަ"<(k -4&m/UW+_,ͶnN49:Fs%m d"Iq> sEĠï5 $hqB"VjYp0k #'J| rv7N" a̺ӣE6 |2e8Gt~m5ķ~qz{njB3K9o3"MK{{yn-3CI* 2AmŤwc.`0ȒGDS1P{in!iEʰ"$+6T5!OvWY rRQoD HaM(}Z7<kvTh,x l7G԰vX OO(ͷZszY9*=@EUȎq1Ŝܡ9_#N!;=OIGL KG] Ff7[w/sKl%X~B(m.PwSJ- wNs-r#l?T:0.ٚMz6TTfhfCY;Tr:>ڻߜ~ w:iLw_"P@'tJ% E`k7Fz2=–;o-0V5 G+@VKhBW ߂$(S% k y2&ڀR~o1אKVl6j?FU#N05ǔV}jU`L SL>|Ni[&3K놭i)*6մ^S]zE3hAVg -gA+#*{k¬%7S eDe"+#j-éÕYˈfOdtUFTќmHm,}4j)kn\5, id j /WF RP:hFYq)Hd\;D%X>Hx;ںQ 0=`n9x*[\8^c׵SEo8ChA4le숤 ez#ݕvHpv]tsnLmNa7D=-ц[=5VCmZk6E=ԖO0) ۢbC\CP${ꄷ TO:7@@#;`ZWUt!3KC_W=7P vF #@hFx&$$6ifiLHWj !mBH+剨]T "ͫO(-=*qUʃ>WN 郎ƺ2"Vb;/3h$ZRKnߗґ*L!u`V8F|Ny!0nKvdMWj-G&F˰(_C7A|G7Giƀ~N +kWnB^%mGqgF%\@</וOJ4]zxso- x{%n^\zy\&mUo~~R54[G'lʕd+ |i(W.@hNEQ\nV ֭ۜYSNm6?ɴp?u+%4ۺ !߹yKgn,[/f#XP!Œ>XZ!V?LNr sF"bJw6İw:NCHZ.Մ? DΟk )f%CDZcI/4! 
j5o$lE2.>H%d(15.ie6\89M0Dٚ(}g[2ad$.e,fvwŴS3?,pa=-dmp}TG$O)!B,|E8=k}*[ gڴ~jwܗU^3Ĥ=w|zӛ˔u:h>PA%%m9ۥb 0<=UזL8pbOß<vi׻|R28jDex-+!TSՖw<@e |,eK,D,¢هFzɈPpoeeK&q d!u:(P ј]K[.y2쨵IV7-'ݮcGKe_&B7uz#oE|IG'}t}[GO/4dtQy1SHBW.  ֵvּ6J=QR b蜪ϡK_oU'uįׄ6/Zaj)*˕SQ*39QQ;{tX08eR.1L3#/9A E^! C'=K9O E^}h:cڤr5 )uTc]R#6^풗-6e}ڍ9>A‚1d& 2'##G ӯ'^dåjR~n Mݐ֞;a@P5@w󩷇֒P'W^tt$o&`dT WyKB@ㅕuSݼ DET듷Fԑ{97׻ua^ND50:bdC)dYCo>x&IZаZxH&XǸ̲[&Zx|cA \bIҀsS#9Uqtj)u'76>ԟo[=gTɢ a -MjjiĆF(+];Uuu\c ;bՖw<ܻ2u%=堳T]A4)'gϪ9Γ>f0aO'ϹX=-quH|RFaN)ax8L'!2Z|;d7x+7s:q# G5kEBUKt][e|0Qi`7Bm 8B@kuc(3ы-HFgՁG52MMنQCu 0!\"yrfǮ͘Nvu1U-- yKՌ86QY+|TG 82FѪvZ .ͭ ;fwcfZ1U- B*'nn@4M3;CtJXAI3pDfigw/5}w;z߻YڝTkqvۓp2j Qmd p|k%F>t$d1#cncRx4";MϸuskkaVңӋG?<7nrԕZ7xg6".b)Щ[`2Qك~@K"h 1;_:Hq S%%3yekO|3~\b%C,}E`0= h3gf.:AO<~ܟÐizc{3PM>fU?VH x,c^>ҭ9+># 94PqR%F|T f|Rz0L:1 h/sW9S0V,V>p0FID6VNu$JyOyJ^{,u÷^H1Q*+߀ӞL!иtM,8)C@S^gdZE#ע#D- 1t/A>T:#LE۬!kxK44 a*ʃ/d:8Z&L)yS+$aDUGIH2>Rt#3N˛ ߘ3ҶћH}2*C]]\}gxG2T;G^uqqS*Y$=%UګIĶs =z' &¦t(dZ@xן6΍x''|vzp1&~M;i+n]tuCUAtޫx(U\KWmyCHb5쉵'iz1"҉ $3Ƨ[YwҡnS @<πVF'ˡL;hz# |{[ zw:d )JH+IV#oj;+{iKYĞkjsŬČ.u6|PJJpZI A=v?D xR: X# pp\ _w|^_fGG֠ @urؓM!?mI9sAP+ ⸀LN::=NaMN&׷. 
@ɸ(W2[;i)ʃ M*JCQ5{y2 җe͗ m!VVg` $\+n4TN!9J$6_L0[7^UʩYklGw h`$q^9._̜@Ziz-LU3/*Y |-xsȫ%9 b“7hRrF:}[~4ˋSG@Mk Q Nth& 6L8ZrhǘptH':c:FQi qC#$.O^$%k aYsRD4l"^cC2MB jdv>ʑƕ(e?%dYbܔ!3br'I&lWOa>Y,wO7LrsVT]`LE.T=ȭo\n zGۡYYqXUmL|9˗)w=A$7pݜ~8V#+g'`79tmsFQ5ieTk: c?f KG:ajѺil ŴhMȦ 77ǹ3-icq͛gD_؟㖾.I2$\ѱXUqo\YFm:fHݸ_k<~:?/ '!;me^C6 1̡4"ȱl"@`Aggr^?< ~q YD`4se @r04x{NbEd)gal,/0nv|hG!K+s:+yb}=@+0- g`y\ ;z}4sE~;*ۉkF0,pY !$_;>##fi S:-!X9om335S)\mqTr@ ?荅}kҫ:0,;WOqGQS/:5i4+_'u,Hh%86seBciP˭F,TlKEKkME6ԪNG30뻵M;nXnP;j<7t{H.n͙vǢ%>Ŋ@3Mn:rZh޸@~F&!iA~GcpC4 )|t~::d7B$JW5 )n+Sp*yA U\퓼Ѻ_AR)28ոb>8x=\\HHx_GZ[d@f<Εq/0v$zD=D] \SdzLZB(4:]H[ĵtd,oGe}uh򻹶23M쎏;67+=7{.b .uqұU@6vG"> d1hu^kR Ijlߜ_פ4f;lNǓP4D+?\^ANJ㎈ٟfʜT9ݖ}tg!YyoCx q:CV/s4=Ҩ hh^h^GOg%tG4sF9_]ȍX&KWW] y6_ ?Κu}gsh6.,&udZOo:z#&F|c-U+O崂LF^m@,mm+stsvZpV\w XSص-ח8IptH̰qz?:}pM:G?43ZOrDZkI#ÜIkWbʋ#4SOCvZ|Fk"x7GB8` $3Lv:F]ocGGsC!mKZj+ƦnzF'PuF8;4ͪ[_Ά|cPmtegp1(wlWyVkІS6 >п̹7KX&Eۼb#ҨZgw{I |XaMXgEᢜV@+mݏeN6Qz᣾E)8W:*$IͱŒM![ n롡yRj75!?F77>x!VOߟҠK fF 6JGWѭ݂mw~Y>Ђwa ?ԠŶ>sF)g XRS5 y+H9NØ G+̤4*5WlIΒhz^ߎ#0~vuG9unD]cAQB*(=VG= Wg󪉸 jHd%5QLw{UVG,WyIˆ^dM̘ڀbD]jÎ_`VH >cnkA1m 9X\s,nAD%0d%B1(vKf%8A~V|P33cGT:2G`;;ꄸw p4oIB#iCy9=6'gd.7\ikZpygo&?N.5 0}T@*o.U|K|k@v1@ӚSClNi$WNٸ*pW)|P)8IE rd!*yȒuIpJTe2NZ>6"+WVA_ N;^27=!D}<( o;<E(!c UvJ@uQ |2wҿ{l0SBmɻ[Jbl6+M< ]sLD'LQdc 䛹"l1Ur ʚ­ ~0wZa{>#!%9ew.B:+2@ww]Iy鍁UxօD+rȤfSr*H G4ZmwpRbWtn,j)F# (GIg .N'1)R;~;4&iI3~956= 9cy%׿YFnd?/tJgH }N:l;UI7DFlM]iEk-*y>Q*$7qXbS*S MCnd:t,i$8t Йl4nZ誛iPꦅB )Y&5LBA.IEL4 JGn7|xP>5[EZzM7%sXcߐt %=bk.NII]Du}EYge j{uMʭ&%{Zk.q![ ߮bNsf:ϙu3{΃j*}gZ{L_0ڳ_$L0;ZM$~_0 o`pk& 6ﵞzpΉ\$| M#؅$X$RGq '8[&:")t&9c#ԃ^Vq4ΰy م+v+?Y3f > VfoR`]MK% }%`V͙WV\g`<ƣ:TqxߏQȣU[9t*u^_?NYCWI J8Y0)TpY].o >5؀4'+xV$L(io>["ԁ Ah;Xg> hx/O[G\ԈK_$o}kmu{K/οm}oJb/ww~x_mvNbu[`lW8"Oo_'^퟿lvpOZsokϷ+syCya<Ϊ# j?l,_;7"Cw*5g[|/ `Yk;{vpzI.~{⪳Fujvzxy P_—ZjTs^kKzfu8h4a;_c8UVuk?hm0bZ*VÔ{{R pLWl{?/G$Y_fY˭W=S*5Ll|'V (?bs;~.Q{'^gU|<nβ ۉ/Οcx}P^?: ~o|{xн^3_cΥݗǕ[}h;ȫOGoݡ/Yp CUt\`mg8k>Tյ~uOajW?$@Ivxnoo0dGڽC/4̣5!GZyJ-?<9NW[ǿ?|EFFr2agn7am:='1x:lӻՀ! 
|ze YX ]MnҬm1f{ x_-V xaN'1ۍ1|2{V5.λ/=<|mnbﶖ._ ӭ:irT𚣂73Ħ>^Ǽ2Bc(}<;ZWXZ2|$Ma ÊRuמM,}\_&DRx*W6 d}*4VWum7pJJL8į%-x<T2>gzr>VNfRfrV)}nGfzgR6-"e`@A(p*p{Zt#b,^"bkĕ7L~'DnҐ) &lI ^q3@-0˓30BSKLɗ4Jy$a E:ݏ``gdp%n^p muI}aqVÈiy$ImȹT\}pI"|MܬH6jI֨χ_ -ԶPBm ]Tj[=RYlG! ?Zj;W SuM"SR;Ԅ4쯸P'Q5QMIՑ[}\soMc==_S' 5&9Tq J3A$1Qq֑굠N!nG/Bt -DIejAӃO`*J%0+!@lv:!.qG;ـ,y@uĠf@!q"P(nKm(6*-yK]t2* %/R(|ZHl}rU p6Jp *pqIej4v009BxiƏ֡33W $՗sǸ:>U<'8{/r0}% \>R-ܣ|5PM$}-'.Z /8ywK~5 ޤpf;+E/:2EClЌr\ȜHr{ lޞ!-"@֤ފdS\feI.c:-.2+.UesM'S [e8:8 nzqFbVbaFGp'kf}wcC_Ρ5Mk>7nI+B4%8ϥJ)b4Rj6挠U뫁WL࿕ɮ)Z`L܉J%)eI` ');[utA J(Gx=B2I<6}2_h᱅[xlKl7nLÞh4d%v60B5 AZEKͥdrv2 暑$&)$<gZ{:_avw&G=x&c#|X ~Ɗe#ɞAVS%&Dyt9Vi#^ K7LȰtg/#jRPP Mrqh( j8Q"i3xpHćlZl3žc%u, _;žcX;c;}1QhpvRm߻H5qG/W+Fbeb~xH0QHLX#%`v!ѿ7v*AZ2ы#J3" LT_ +t!]_EK݉.܍+Bp/ʕ  |Bp/( ΘtG LAD@1 [34nG[? 5n^o$2 Os+ Aw.r~JŚOqwCQY22x mZO,S,>DCe\2 p,\ RW[+DZ~kw+݊/[q׋}b߭w+݊}=v+NPB< ^&7ګp5TW -:%סF Z *V ~u" 1`§ߩS#ۈ9!Zռc, c.M\i1lX굙Ki ZFՕȻES1%櫷~H8^e.ɑ~hJNZRz zR~O/_8wgjzڔ/T?2[R͐0|=SޏS"}[KѹE+:.h*4 uz\Iք@sM~Ax55=~[;w2?[efdw˛WHAxj~h`p 01e? Rމk} Vzoh2^?A!xϫM['w:}DA F-pR/{;}sWtޡVE^}oTs P ?;)v)mntmcIyp뀪$AUpyO=G`]8]|9$1ԅ/ %:qL=K?c9%[@jg3-Z3dJ4BIZ\%$ޅj0q|>G?DAӊ:9u~сN`@i eh*E<|Ro}t^?|yt|; z]\ƓwC>,O/NVAMVl:>m[5<SH,̳ޘj6zgǯ,M)ïO'WosV`)!VO_ġ?M+QAx-rFbruE뒯@a9[{Z谓V߷׷YyQZw woR Yc3ɡV Dq44lK:CqhcVPR=Ĥ "P4" \d}I8C7խ([͚.&[AΦm]33336XBM nO4;@Xg2<{Y^xZ1O C"I1En\e!S(bh"j7:Jnčj*Zy^`A/?IvQQU񉦊s >(/Χ=Nt)~g܃ ;&ׯ?p~2GWZ_ ž=w茣?_z?}Xek >R#nB;" xgbÉUBXIœ]EOZcg#? F1/NOunx=;h`~+5W2Ȧ ɲ֟Fx̃`C9f|&Y02$%M\Lŧơyv1huilY67Jh7?IK%g#B?Z7JiJT[vA/! 
/`i:/iH`{3qp WiBi)#{x]ִꔻ輌UW(!dNX6c:^(_٭ RpuͣBM3byk#s lt^@/4LWƚjMSd%(9a}Cҋ7#Ka@_JF2;/tF&ZHiRfW|R<@3yHš^bPMQ yBsE!r"$(p"֪ZrdqFE$uDzO"e| Œf:ged =x-e nn<R@B6Ulǖ@Rߟ @%&bIzd@%jx jP)NuqT r+-L''F&BVA%5`٩feT^wl7HE[nbwGr8-A/^N?y &K#ϧiID$0@ At #joWʙeHL v5t ,Anйr膎K'}͝W0}`-oHѠ$@duqȠaA+l1UZc|Ðh)+aqS UnZUOﺶJcozu -JnvQ{ʨ|ݱ1Y9(-n@Vco+9A7d܄V0JԐA^9XCFŒ@WAJXQyޱAR#R ,ZLV²4'KT ƔFpyeMGFT:x7-隓5Ֆz[{1a`IXj-dAq~K!|B':5$R9 ]Q3%h͔MrN֑5sUaRmε~< F*upkx{ kTm\_ނw/!5AоָDR=g(Q3"k90IYJA]X:q'}8DZvOKq@P$)Ց:Iwh&sK<͜h-ʬ͒gġ*F< X l֩PxW4K\zjU}w* OӁp;@U ̀F a#JvQOrH!Xy?{ߎ\8#͖BÅ KzMrL zJg,0Oy K1::( EPӨt:i3aU zttAJDh$zhñlfK}b f[OR6 НTw$x48myiaVUW686`"H*hB ~ ! zE IFHBtZj W [J,~=!eFsuP6cxmT.U VՕzckq1]REk.4|!Ƈj(3LW-^no46'@sN*^iRTMs1Rd[ȤkEQ 7L(I~Vtpk 傼NH8*;=PYk˴E$- ˟HSpaH sSjGrZ_i$Bjc@[I$FZQMjw\wCq=*=fɁ˃ m\am?;.';6G-*2|Z/i@:~xWɔCEqxH Y(wQc0,s*0 qAB-ۓJ8|yg#Z0XӤThR&2 g)‹Y%tΩ63pE@J4pC$0^s|=\>}8;i5yV'll3O. A&LBd/M@ mQ] AUiPpj()9%OG61 |$n| fiN?FaO"x@pN҆Kp|qďP9$ Y z^D`ʔm4x'Miɻ %6˥s!% @M%˸nJJ78 ĉ9B.qZk@4e9PmrAݍu`Rarmɛq ["s$~⠣6Q&DRR&TypxC,ce򞸛*p62kunA9持-bA|C*ME*eה;oI&.E`q+Ν޺1XZLkS8/VctO U^< Y-l]H6TNv6^٢6xBm;A\ru}Gcp/:݉|}G@*$|1$B|F4yG,X=oFC`xi fG菳՛ׯTQYǟ>(hF?!q0'i.7 &.D"偼[AoЛ+;!{, S${5IYH@u|z<O|6WWCfo^4CU3˩}=^,3}wq=;k';O[T-mP3&,뀐ūr ' ADF8X4Od0q[MQ,,RqJ)14+ypRidqoKYzyN܅h֚[W„J%ےzo8NJ T7()sm\GoI F^y NlL T(N]mZH3]BThw ,S:WX 81ڴPZ&+Ue-Lӳ%VnNU:QwK/uuh݊B,—Сu;^oܸ?7s,>AP?73P\8 ۷fZ6I\u\K.R\ZK[. 
6` ^"q3#vA츠2(By_{nYM):>?QEm8;le=", ŏ~<<׌{ǥ%7p̑tT?w&lӂ|ϋ8"@/wo,.;!T_Nio 𾛌e O9 a\wy~XEJ2F8Oþ?肹:Y4Sh0_`'W7IH }ξn̷jT `_$zt1G8h,Ng]tiaEt2L4`<dq !S`-3M0S.vw_7ϥpT\*(^ 0)w ƾ _/_fȂE9 nq.!1Mp9&?cǼ|=i'?~8OHYppY/ emBq,8@1>u"!72Lh^9T/8Ep N̚wfj8u~R([a[YX֔@IKmv٪KY`c2)\aGe@^|‰ u&sVՆzJVucuU1VGXu*8G}P5<0v@:&bwuUKiߜ .Z=T++|~ v K׻Z$օFHlB*n%=w ¨ ;x-T"f,ǨKkq-(KA ,.bNÂgIGdVĸV݆z_8Eh Ā35Vݡ]i}&i<}H'˄NiC5-ţ5P#W8!(3(꘨gvoMx:>NgW~\M'k3O^mK2TQ!VzU,넥U좝e#B:X8 "sqᤈHKG`ǣ'o#?_"S;wwZO8]F2i|Т ʊoyr4`?Ի82T: a+=B] EHDcSp'̱{ g즺 omy)ĸgb'3] grP,`BƗei~Pɼ ÓO&6 Ŀ.;:7 [[uڹwUA+j2-fŽº!c 3rk^zr}n֭ D`_9plMܘVsw< s { ;u%Ygl\7ndDdR*zpMfRxkؿ#uBs%֍h3h+SSgE!$%} wJ+xVZXU4q\UkJIӺ{~,$o1 $-2f=)c$:pVtӳOw1VE.fD%^HϪ-$WqYv鹹s?^'.e&O}4ua懿8lur}tӏy FJPˊߴ)?pU1)h}7w-=hwAC.=-F*Y.hhK% pX3['-+&[=VH+g-k =n|(cvAEsDWmg?A Ne j Z;uxuۧNrAͽWh;B4 ^@D<Ɵֈ:VYǦ2U? M9H68w)u˸^.}Juh[s}ۮ`9hVM&bʝ% $9r?0WT fXG=PVXN:Ehm#4}0gvRRۛ\{{˒\Ol]ɷQ&ڢ\ϓ 6ݩFEm>t/\'_zx73eOF-\I~{s5eτ;yA\^+~L#Lr)`7KjQ +F rs7[dwQkm\n"U.6ۃ^7p~:KI7B\ߩAsXcwHi\_IW͛iMhY5t2;mqBeOKh'yXds. '#Df# 4hOy;IwUٗ췙Թ)m]bIx#8 >pK6?IٰPӥ-hĕ-Aa0Ct9_~$^]w7 VHk^mlKd^T1Qc6% !ء%Oē|apFo֤+Sf/v .ٔYgr)k\&QZx״}beb&ry59|oLLDOM_;kmqSZ_1rR^M?t4i1j? 
[Unrecoverable binary payload: gzip-compressed data from `var/home/core/zuul-output/logs/kubelet.log.gz` inside a tar archive. The compressed bytes cannot be rendered as text; decompress the original archive (e.g. `tar -xf` then `gunzip kubelet.log.gz`) to read the log.]
sQ*Bj#}R󜅠5OjR$rc<ꠄѺ^­B5:\P~*L݆p :IHez w6l:x,)JQuM y w{ IWܶzpSUuH9̾B.Ś9Krp7}ko[(HD-$S_Ϭj,N!7U2+.LAUrk啍Ajbu#d` m5͘ǣSMJh,gFg$"!XO*'Z5j~\sxW [Ujd.7=bnG8!&MJh)D'qjv34V/bO+oUJ+¾Z K{k~VJu@_W^ɗЯ*9#4 Гɺ~ۚ0#ڸ:Ix>x ):%{LvPD$E%I)MPE KĎMVf=r[׿݅mu;NqhqgPr쓞Y4b|ϼx!ޏȺRdݽy~^Ϫ4DnLƆ336"9W#{טak`uj` bx1JQ0=8ض 9'@Axjkwx}[\HGǸv9W$lc{ $yb53GT ;~#0b)cP~ecE:N8eiX%eKb(vsiZD1:tXo2hG:rHӦ3"Y-jm;Y~eDNR?<~LۘBqxb5Cz hB7/vمEsv!V#Aʴx[T+a,i"K_rUx ~v!hvŅ6]Wv_5*S^"5^-.F)_( s!Ƅi\)o4FYJ'7nBX-h\j=wfvׁקf}ʧW4OTÞZo 2%Ɖ𥽲_ʹŞ_<$[,% ;>I3w>>Zx7uZ@T',g?Y!*1 CcdN] +f4au.]}s~btznd 6,}:i^D2*%y|ΓٷؘٷyݗwxMo^bfFp^%7EY`yZD"cơ2d:*-,wzɘjBd=F&A3NKd^IK (y.i]#$T4쩅V^L0eLo!&!XFh*s=cdCV -7Wwb=췪Ƙ5{VMt2`/*Lꗻ{OJ:*guJ<~XX8.-frCp 8?.x#Ehhc̐kքcvw_C"VM)BB'yw)BonV'&>mTK-MN'GD>,7ͳ|r QZvS܃),)؅9|2֋ڌ|B`FB\[ jDzKu ȸs6ALrGT;FSi彏 X,Y%/B5R7G=Q>{E+zܤz{XSW.B ]rp(#Uloô&'mݏ>Ckuvm.E]ւ+9\ik Yg6'2Уm",}GIW5(ze =;o1/U0 `C. 2ƗuMT 7GLWlz-+;=B2% QByn)Dgo U(li;O !h 'Q_7_bAԺw: z UdTk\q*k^SY H Y  DI)'RRX-,xJP:9/7B&WxAPجu~W楕.ln mt[S@ëfD.YFɔɔ(+531FfG{6Y*(38r%2H&FfTyjq-soJp>y24sBA%|沁dBdUMrۡwtnn2c,2ǒ/s,2ǒ/*RɀZ4z&&9K$%$IYl P)h&pPl׶Vvm?x(3JC/v/]]%>a%/N |px *飓&i!h~5b"Om&2"Wph%) q 8IYq`]i(:0Ϥ5SİRÒW mgr+ |WFf[&EvC@9:P;FGkF_!RInOBhO:8SNI7  [2͘[aF9t10&AM'} ȉq=\lY`o8K/YqȬT@=V .ʭ :ns!3w&nf %8U`'k^Z9xd(3kI$hݸaCK@?$H!D.Y VPKgd%W"э \qF[(7.P L'w(7pq2%y . 
"Ĕ=pQB,OTahŔvQrj\F8e#3瘂(}) `\O;IÝ{<щ@BqOB7eD pǑ 8ܭnp9νo,\'M0ypM7?޿h?.ÑԈr/d+P=պb+Q7bd+V㾝ktgC#Vr>4P"#I9]8Da-$=ϗmlU{)vBhhJWoh7h5kTlxCq{ywt 7p[upӨ-fᒎChh2p 1!.Ըm<ֽ ofp97Lj|r`+Kb LNvf;!7S+_dOy\[ ceMWxv|Lcc~jJ }ܗېoS{M4ǧn#"`~il%чtKv|{*2붅C[a6T齔@=]\SޖݠpӀ{)OqnRB#OwxTejy=4f#٘'*C-))]Eo; Թ< nЅR>/$D@Џq[9{3I:V0V#?!JhY/$2N#Rr 䄍L(w # bjZaehGY!S'!3/3.QTԱ)3p&F 2I;"UwEFv{#:zHB]8ᬙґLzk%6p4,84 1 *@Hna$zHk֫O} Z*+!zK!Fљ,6/ҙ KԴ8C0c98 n4+ּR8`Z 1S-:u_N!R䞭o`[ ν4MhcXX| aP,0bÀ G aLsLJ IR#oq+avmWhj0<[R|%g8ԍ:_Tft~ڛ/ >{ 7zf|W>vI e \A47o $ T7lG ^ޮ3&  '(g@Pl)jg4DrIFhIdu KjF-ŁG-@xx9\&;"3VY8$Do-0,CG]]\Bqqlu6C9iM MF%'ڦ:\fR@ÆP\O@AWwP%/xeoX7CZCW .(w1jq6 ]ٚx/}~;3_{ ɛpM\[$=&ykO]ϊH=~t(]efeklY G}.^==ح#BNh;Wn.k\52NU(U=:ޛRxPGp l\sPfsOoP8L!61Y!l TJf2BH*F8@I4ROQGZ-17FiFə4G# y=v|Emf6BjFKhOشpcco "LWi68ّ>O/O*@ˍ3 UgBpZ(æX*ttjɀ3x'¨Zĵ ~|tYeɉF0kp;:t$Y03K!dF gK9opsgoϾKiw5.قf+!vՃ:s2A6bj6Jj:?TnI)'VNOiO)Fm]*qihD&a28)$J $*QG)i1 eˡO(=,R8Z̔gN=:ŹjtzK_[ݙiF}ʝsy}%e] -" &i7Nik.lP'h@=A|Ns̵7T.-g@K.Wʞ/s2W}m~z,I>0.ߟ]|wn"g2 /my-ѲHd1=_9.p_/]\PrTiVb $~YO&-3_Y5~3ߵZJryXh_֊y6! l+3h;SA R"׶r:t?P*?&yɒWe}ȿν#/٪oo||q|"~\/~˼_V~|L ě 0j9qZh9d$\H r~^ߑ?M>f˹^&6n q 30j\~6'븙A ?xj)M(J%ZÇ/q5N8P'@3*6RM:G MLDz#o"z wU$TtʭMSCzvجQҒ0.)QJOR$1/5L'}@ p| ЬGl<x6FPf{9ڢ9|T!v[:>E)WÝb9.Beo$g/ a.J[ %ߪDE&hzyV̳͗dxJ1ԒT.{%xoH7.wz}7sˋ ܲ[l,8ՃE~DymrIY}./R:cUm_KQ㺧p㯟_ q;/RR2 9Y195NYVfͶSp)9JiTS2QR{?_,ܷVYûTp'HM4͹ W[==~0Ӝ널p8EEH@Lh-ǥƒTZr %aomc;*':VG2YywQ!s5c%Wֺ#(IR۳|oH٨|_A㶽I  e((` _M:5% >q} 1J59bݳ{uBu.?GgzR'\\j~;`TU&D2{D&ǩ?wJ~b\)Zf9L쁬gRːG<'JLn4.ޤkF$iXr!R1ʂZ8M9c jm  #jdKR 5b{o}Zp7z? 
߮{Ito  :$2?`ajf7S$8F+My y (ic;YTМb:\M-ՅbsS1s@]oGW}Y7t՗}X 9/XBtx߷zHQ#bg89[UuwUuUy¶qgzs_[&мFc >Ν8L+\^Dq*lN|-%"^az#u˒U:d(%NPӖX J:|Z)\w,!J/tLIC y2s|+%60E@дPkQKnK)^mVyh8N /$rBi&DܗN؈4wTX`tm)Sk3hJ57jGD)g ^G#KF8H9mTvqRku Epo+S7'Q\2 ƠƯM!%vdDLnb18TP@^ 4hKDLq4IM 4`f$c<˸g8~{(Q;5w~^!w7p߲o>?]0MQ*h/;;aJAXQ/c ?z}2*/'?E5"seo.On FTky ˘VV7y%jͥ_DDR\6MMD0jܛ%$bӐ%͗0;%fu2npΛu%.Q,MScFI2XܮRъA&8pm ܍52Vd s&hǺ]KA9dk/py}Y%NY7/EtRܥnion-%DQD{˂iDkTr@ tK{x)1x+:/"JKD5hE)փaEկ,rws-%̞w%4Xsr!^܌sk!P85#F7"2v5=gۜsg-: 0z)?5 -2BUʔՠ4ڼf6ϯ^DY+¬sSdt܌azʋsB#܌("½RdZ͍)9XM⦚:`<6tPp8^J ;L>ńKkyheN҉l/ܢT&1*rg N+ wqxG:MLN6=/'G~'t[#w\%Ǜtz4XwYOs=ŋ_g%WIjm< UbkfqSxPwAms6e1#mHS`5O[CC4etNGcD K322xD̢ -(h~!h_HSB]4}kl6yB߼9$]oWRݟך@%KID*3Cz~4iXH-od%XQ{ x@iG凛k\B:+dl>+P?c!='Ջ[@hɃ_g΃nUZ񅩖|XP~ܲ G6UlѴvz$a)um3d* hN)pw.H+䨜^n]rZjКu9ՔfF<ؖhNȎ47#N.^-]ơ0oК @yadF pڱ|Cwbf5Bg&"|2YlFϩYP2l>Pࠑp{ž}XwzP#:\Mjv5-|d׶͂{+X?>>NW,зcАkv0ȯn?D8^>rCT+3X̥y8@l5] 9R3;x~kp^"*//au񶨼υ-׈=b.,j F_ fV5S# 4 $l0tQv-Ʋ:}|y!5H""{ 7UHjFu>a.̛NML׫WdAHmLJIkj\7I/o<([{B*ؼϏцղ?;;Q^3J&z*ৣWTm$;poۮ}n#p}m }qܙBPZv'k͙PEe?>HIT I%HsNKtd$ U,tFhIo3)nmH7Us6A +QWhM@%P H &ZbBHs#/K(=4un^H~&MEUoqvwʛNbeei׿Vfkpfkk()╊fkN}3$ʝ8ʐ 2ZJ2n?ޮ 2ʃmֵRY3 f` J7[ Xi9jTjՒIۅkdOzw$񙼜C\8+2Ifc*+l (3_*B;ל #- :U@C)j=ɾ3R{idFnM!>^ogY cT,FxzEe3 e\Ά 蹁NJ &~-_uAz ȬahZ10|* B.w7K 1#8r-pz~L1@8m>s`ZjQ>7γd x}-.>nq(6vggg'>ɫKjWIJˡAjӑFcJ8!}N4@҆1l!&+,=3K/lElG62갬pG˃D!U!R! C :m)KiF&TLG2ywͥz)[sh/( ud9RC5ْq{.*Q}бPMTk !_b<DWRsYɼC2\4TpP] BDΫ%<8 Cl޹i|\=C$ x>}|H]1A3c3%x+ҕyCC霋{)]و=Ҟ5Z74BWhn]hn~uﲻ 5(Ytkd[қj*A?YG5Kbɺ ̘3|BL.'8uZRFvK$LnNZ» (-dO) }SS3ۣJȡ 4`Fap:u}2:u<[|L;Ն>槠SiPs Je7pCgY.N&g';TFw;6S6 ?N_NGD.Q|M/ھcNyizxWx<]DS*ڱ<$W|\%QORPIȧcGO.ۓ;̥۟g.J,CX9_7V2e0ܮMV7jqG\b#:mdu둧߭T@o[n]H_\D)(PtcF(Wȱ0y!^=݁pxjqX汲 :vPMs"oi=% *8WH,l'~bS ya?ǰ/m P:J/-wJD,dPXF8/ ~4U]S'i'բFm_t:Z)@`eK`[,糋_/0c+OCze{Wh JRHDž[ex[Sv",0|VX+eCM&m+I:3$aF6̐&b`F[MNsGCik.&7m 'pukBZ@7jE}Kz#YK8)N2a-,Z|Ns'O%ǫ#/,̷cLP0RZYoOH}w D|#Ju#^,p{_œešBV_ÀZ("x_9e.US@%J Qj}5˩Uˆ К{_ҨL010`h9| swB: q-Ϻ`ӒqJ*88k)^K˅U )WDRqeIl $/ gmCٮ?T&͉5ݖy}:}DZB"$ghQL!! 
c \R֋>tMy/ex6Tuڛl$zYeKx¶  ejl_g'Nq_W׆r>/Go.]|uhuoY7ɫ zFYb;gU׳A1J2d9DUjɄxdj~0l~ry] < #)Mmчɡy>ɡYoK}xq'8ouWdtUסGCsC7`P {ɴs%)uӫe0 8ҶZD#$^2WlQ s94r706@)HpA DiJʒ'!zG3dwf&ǔMo8Jxѵ xJ9YG*A'%1BRJ8i`Ai`,fDZe(>6Xi_Vް/e /%Ks™8A@yRJ /%V[Ī4E„"@KPPI#F0i+// I.``?\lV4P{WFn _v7EN,a7#;  ݾdH"Yͣ:X>@֗@"@~)5!־XCN>CNDI 4har Lpr#TisAéji2.̮n^TNM~A%|Cko/>]4O[ϪL])g s-9!SNJ^<\ŭ ̉4bQE4뎩ڟcx[:HhEIh(Du@)-x[ (*c.z\^1I8N4#!!C(`k|WcX&(_= ]7jj)Rb=6 7SF|K˽ ED2s1KIL vRUr/lbPtĩRHe|w[#twz=K[{ Ȱ/ɨiI;5iݨ?2Y84dZ#wV26eҽh%%`7$XdX@ m4Il!ʰ1;oCRJpnJhTqy5,Ͳ.cQ}>D AU|O[mIM3S"4aj|9+AcN']iA2!޳Y4Jl%`_nA>g/@YoCtRkeq]< 1R,輋@!}eY:KK72͏m:nm!5%T&i"T ?ͳk8o`g=!nQPm:XĖb1Ij5PaB(mKZ9S18KjHĨk~KYiaZ2:F%ƒ8hBGZqc0h[D9ȑ0ÙaLb9lM#T;4H2=qnmFGv= 9QFV[o˦Y ]gufnōn5[*F;PL'ehEԘaڧύ`@t Z=Ds`'u:a#j n'%1Ѕ#w<1"4WQkuءA%-I$JJ(pYⅎMڞxNW{u;iB՜^K8aC(ivdIGZ.ZzǙ)&ՏRkE>l[6w2):_jA뼷«?\%L W9f$׹X'Y;_ 3fXW;>pTi!ot[ڤ$3SW_C+`DN1{Fjd#j DK%ne:pGS:.-t%B`mH>{920f3qo[V{frt8>1QbdaE4i(/ݟ3kkbN{.5bj8Eo|diNH/F38ʜsU.s|>q6*J*ftL@ޥ_cz4nʅ@ЯTK ~kR]qo$ќpυM 1DюM|!R&3V 09x5;SN@^\^LtaTX46֒Q(U_-i zS>/ɤնx ܦ ?eX,CD%->c:X?Zۂ٠<#ʗ$R*AENAjY[\*^]%\DÚ\Ƅ?|K|~[QHvt9Pt"@OpyOBXV}Nv͆oуY0ǕZTDcwW>Wߧٸ~pgsۋ7Yf4fHެh򒪮_>iWoS䋇!5:X-عHݹ? 
y"`ˠYဢd /JH6 w3-kM9\%5-S{c/Ğ('8 )IU |f0S g3Γ/_?c&jJ)^jebt Ɣ2l2:h"zɵRSd#njjj^TYZk8XF-$PHW8[JƜg HA.QeY]UQ䳶J"0,F$Z8K%mB( p^gbͧ:@+&fॏr9ٛA2?ZO s z-hJ=(eW $JT~|Cy|%y{S|-|4_&0_ \u".g.L..qLA7:ZW“$fDerf v9`z|&R#l1B_1L_#@+լ\|;Lj&?LWT.Nm|8.VoR6(> \`ny̵vG 0kh7@[kwJyKwaiC欐rsÇ6]?ǮKM"-Amm\[RVK?zY.zzp-M4Jb5q*N(hcTjᵶ#ӄ Iqr5A2~wz}{Ѿ{""EzFxkp2%`L^'uz[@BQ(!1q s՛I((z-b21Qƣ t{s/%Sğ 䂢UJM)|#L-kQ)[) p|!BL)`DΙVGa[PZ",Y8#['’ҷ[(Ibn +VäDjvԂ׳|}#X-W鉿_J#ڇ')C;~!+q$L/&ޡӟ>.ATAXQC.&q6_,C*vg4POO R@S\/{ϴ'FRΌ6j= DMv77Vz}p%0n{z T *`{߲}w_`lT)C@,rRkRkFk t&'8uGL^\Pz|b6`ڄpD>w?C@7g}2O.{T;5TQ4jB;b^]\-q4} jW|O:M+5LjJ|t7tk]\=(z/1"3^#:ͻua8'~8/ug %y-v C)ٴ1t]5N}rߨC!#YN|)g0s>Y[!m-iݵ_.%@%ƬSՇ7ƈ$]{o"bh߇#=~eӮ^3PksiWbtDL@@aG#߆SdOoTH%#RQoC*[&RT }c\f\g QЦsbA>:z$kN5ĄfbB LjFKϼO$++i%`Q0H*Qq-7srAwr`:Xi3KF:*%&HQUJml(--UKԑ|FHm HqA.sPrf5w2Z)QR{S"奴֥ L,%-' -'WԚ::^B0Pcz8ژo'E^;/R_{'q>&2'Eho5IW_Nj-\,wm#_aft߇*NUb;lj\إE$egf{^($@i<>$~G;pJZt#\Nn-u#kIMB># 8"B'qfupGV⦽;+lWyDiLB~#__mJKe"cRϫĘ_(W9!WۺN!#9hgJr.\QrPYx`DU//zWeDTŐÇ+}&vYS>/{ Ʋe_QЇgddċ==yqu5/^B 9{RQ.Z'p3>Ne qsәg >f3\}ԇhT8&?;TEf1i0" Nj5\nȨtk orhN ƐogW <L ) %'Ԇca"2 GpKNb!VR` + 1AA{4ؓ(Tfɨ5& N(˚ 1/EyV$p+){]?SngI|ݣZQJRdŒ{2*)A\/4 %NRN:s{O#2mm^tC!Wu@r \R2YE2YIRL=QЮ Dcί+EgtUnPػvvWZHm;C^Jnu"9D18tdq0u)'ŭ:.wt\B(OנAdQAK=L8mI1@o/gtRoSpt3gOSY:hD̑^[DMׄц'}$j9LӥWD+$U@NWҳ QbP;bS!4nL#j2+wF2FMz2&l!EӚ8B*k'VrRDZ!_|X OȆשWLmo~lLAwv<㢙<-6l 0iٮ<~%3!FTn3Tn"[x4TQ#<3Nw؟[$Yux1a':xƾk-v?mJFөm%]z/oMRx<{'. 
OY:Ph"n5$WI7V'3$/pNiIjާ{рe 锗ò%Қۤ 0^M5l.4ǺrYk8،Yl3eiR s&V$|Lg߳_K> gn0, |z։1@9pwum0zxZ1ZQWOHRh{D g\ ,Ԕa,޿?o (ǻWkL޽7ݟ՗ YfTqR5nRM1Ucq73 D/B;dJoãR'X[oDE'B|QF߯{&qo:>!My0IC4P.nCIZ{ @J'N$WhdwT%"7낪Jmra1ljA2aHDSH#9gLN6ۡ6qUh4 Y\٧jG\_U4DJOG%_HfxyDyGVjT 42 ,a!a*y%kpRrBzUjԤsw}lEL\q<ءR᜔q`̓s!> h{b CDk"1fH\&*+og4 V U7EQ@5X1KdGpRVp瀌: 왂T 3L1M"֕le^)ZlI]3=S+PqINER:3U޲#ZI`j צ[\&dO9pBs2Ӹ"Z]Ȱ/d \HLL>*M##1J )#dRJp)%e7Ub @x!)N4i)(iT*o j{bg< X9(4$8L8Pvجq,@>UQl4p{U8m&m, DƠ}o\]8, QF"4A(w h .+mXz L(U4 5:Gc<6BcH\teibi(\HA@)>NwqcM% -CW A 8IݬsR`a $e^w ƣI wS ^11oeRu/X)+gzt<:Pǫ+ _ZI^3~fy2n$ćiR1c IT%%cz7I3iu:cˌ93C(Ӟ* :k̶vJ| űK4aG% WX:[ C|[Hm o 亣8έe5{@PciE0s1xV[`+]dV 0B)b:#Jx1`m=׊G-{+"w "v~} );:iù_ۄg;J>/-QX866 1ٕxLfd$c!i)J %i<؛6R\+S?sW*qD{31s,CrH,bu"?|z5q NO5XՂ㋈H,U!'5|Xm"'v@ BU CB_j(/ ۜ:`^#W#ެamV{.Иr) N ALpqpj˛LV@~h/7G[9fi\R`T!!RT&8m@XÕSdU6"Iq٨Ԅ.g8b`~; ;PmrS F("দMiUw75!(L^;1ݣ h#z((ٌ]ۮک ǗU1kB%@x)6KMj;y]f+:\؎m c9A}tc WI1)qU23rbJk4CqsĖk!P&p6q[o_|5ZUAx3r:#fvmT'??K7܇ɗϊs4]Ch{vՏc5{Huy`0r2U)K` {3o}l\1ֵdwj'r2 vYBh3Wn'/K.=9#Kh9-'u Wޭy~bnI$1P- U&9]v[QgA OҙP88r0 Ow Na/"Z˒LUm\߀j,2ݙQ*SA.˻5ݕM6׷oE9u }acj`FfC&n8Q؛pG}_?p7]0ə3 )ь~kr͓Y5;7>W)'~fKb@ȨJG c8 nEHo6G?NsW_qCSd|xfkڮ^]0e9v8pif#: ҙYacJ)]PwqWx:w\CǶQ ikrlq\3-{<“{7v?I/3yzݚj`G7w^aIi_%UսLYNI)<> *)2`0nEs^} i+5ĺM5TVDv =0c߼%Jb7mhHDz+N`u%|舱tBz+8dcYQīwě˻:R|pݚ3;ړ':+)d9/lt$A@RZ0Iք!,Ҙ&N:ǹB0KCDaq[`ŰB]+M4SY(EVu 읙M gT-h9i\CJP'[田` bI{ƴ'-Ra@R\ +b"'9ة!'NS$J V `@u޻+/ѫkiH Xַ4<Ұڂ{l- #|8&ESX'Z.yYX3԰77,(* Ao&5BqD6uSv~{_'ev٭FhK,,WO?MB2UxDE#Lx9O~k;M!voKk1 jx{QĕqG 诚̛tS*B͏>"0NP .{+ApzNBI Ɗ،EqT>LVBcEՄ *,m'"HdR1LpfFR\JkȵuYZֲ::7;=#g>3b`$<72VIפpx4# iş!M 4%3ZsQDIk-|{D)&A^8$<p~֢jo8E޾p@@O{ԂW%ߍoslɟl/ͬnbz4 ^}ysC(&oRhX#AHu)7bJ>bGrv8QZw"^1oGgK1a*êvw1 xW$,(tL}nA6J9ٙO*SN.@o[uMȤ],^+4,yӢfvo434Ӛ4GI_?zQ|8҆dc3Xz#uvszrp@h>n61=ETKd; d6UZK|W;hoA%Jb9"O-[L.<;Pcz1 ~K d݅Ydͳ?`5qQEl,4ds3i:_~ˊR%Q?(Rz7ss[K:0Oo&p $N9::6BΦ Ь\ǕZmdO޶BP\t2ENbΎ9^ f6IO6~ޚ035'"'!*7, E"]'R>?')~HV;2at=%ܢ>"_> V&wq+.ݬHy\yܕ=fٵX73RA@ | KQϝ:~1*»J"xn^#HL|FPX&:pXEK8ldR҃F2b}_s%uϡaoQ2?0fo (}};/JR>\SOaO/,$=ԯ$~ 8*$M~;yxv?c|Zk|' z*ڰ)C3p$i 
N'=b#[';c9Fh̛1o~`<%$Ac?WK7=̾A$i':t ϱ`%16]ҾWY-A:e]/Y7V!uzF#B{΋nxw5pw;^_G|p⦍egG'_rjOSĸ|'˹?_^jN+ux+0w4(R9j2b)|/!<ͬ/|ZMɝ#x$wΫGr&D lpXXE5X$co+B\q9R`FZLC8n)Uvɇ"7un.E1dq7S{/ֺ ЀN75JsHs炗Γ:@@yP.c9Os۰:\ooFuF[Aqe CSjH"Qs92E!ATYghZSۜY,=FQwsi!8yVhÈ[CsmV[4+X:^s, S15G"\w([kuRRk嵁kU'+g?]CX(̬d#VHQT8u9FKrӈکzOLpw-`+ɔNJIR8Vk[@܀"`K #JR?v:XI?scèʉ€ʁo SRS¬ Ċ*Fq/D[DTtAi)#rA<W|^9 o}eXl|wW'7&xg !$*c?vTyn^,\݂EDddǿ1'?ܾ|b6gG['4w˲Yb5< kD*TN & :y<>mPS͠ZH"ABDyYڃi42;P@" dJa=5f,؜'RFnbXwa)Z eylRlZ*!TZ N!ca3Ok Y -A0Y^ h0}*uLrLD(M땥(d`hʒ=,klqԶAjҚУ[ST=FQ IկW`@0PWu1Ӄ6үK(")8ݔ/(_h*7v/\\(#P+BπP[r? N;;WY%`Ji$(WkŃրTsp0כJ(zpG&_uOJmDzM?ǣ+_|eյɻ_D.;}/& '3_3lYk]ҲjKY3iu4er3c4ʴq|<)W;#<)/0]9chSu.O[.Ϊs !˷][ڱe4;#GeKwx`G&cJA< D 6V!@7" 2{OON_!Xczd'N'1ߟ|'4sAzSYXrzbi`qEr.ƋubCƎA#J>oJ5xXurzAZ>\!Bվ"v0K#Eߑ{θ/le`tû'~,2|ѧârH23W陙Ӄ323pjm摴Ggd C/JJD%/q,|(xJI0Ҝ.,Ş7Dӎ!~^֥;}ս9V]0fJtY̝CV7j2-.E48hZb;K]1R]i6D)J!~m-&\ ҃/ENr$ՊK48AwL9ȃpQ'ZoEQ`5 Bj #!1R)XOЌ­NPĤSS4*Xpƴ'+ e4e]wf0J$׹52d21if8y#E=[}C T `a( w2k㹗rvo* /"P2NK#ф1%em XФ}rmVZ.s{wTO`0_ sV'guvn]܀wrίՆԵ4 v?¼+{Kkf' |UI:3zWaf ^MP}U J e\IzaZ_`} A9ۣ2p ,y{=ͼحJR*.ά63T'YIm~g[؛Ÿag Ol𞪹ZCƕq&*g|ٴ7K_:ٵ1_/` m͔(i3:c\ ݭzj^p򿻑5eXw`Q:~=hVox qh!eOz`G-s_~%xZAG3(@|ŏY?%s$o WKrl:د dɌNX^+tFKP<:H M1 ÿ9)y& s;|cQ,~{׀Fo[r.ƤĂ=7i+ß􊿒Yn{Hq)Ba>T4yzzZpJ,y2d'BzC;L=vj,rr& ך0XQ%&<9A[l9+HaQ\\bWհBqpl.5& plq9όE& پ߻1/\w!-egHS λ3xWe3䚳E7GxjR4HMia>쎏dwu,+G (AVGQ 0;Z} FۆQ>ZX麋`93ieZJ6c^2n _>=dCCX3N5˧{7vh+E-2!L`,n;$[ YESduTRa0}*05F?Pj(8=0x76w m ;6F IcJ;A;֍ кb3tRۨc݆slADެ[zHyC4aLq{zh~zv1 X5f3I _ t R a0bH=~FA!Dec,ϓK9(kf5_-o+MNWf_>_f/\ -瞿~瞿~Wo&KF&VˆVoV{3Kmd}Ʒ|3o+/eq=Mn,aR cTv `Gua:<ػ#A. 
!s:*D̵"'"*Yw>aE|1c.0s* ǘSD"!!IT6DG Z-QDT6_"~smxF(.U4ZNHgbzXeiEAsZVMZ>ݻ/F,"}7Sd!0Gd\*lDSi(ER x[lYc8|no~m^]a#6&#w;| =BF)pأ;GIrew 0duJ[D%҄6ӡLQd}yՄ|}ʿGRG~%)#C(ʠBmrJQ %5)(~Q>ݤvBp^р1G3IA\]IՊ ce-f1%fb/<8i>Hw!U-7[kfIn%]]ɩ rL*KV *UL[Jg%r,cE3% BLOgxG-_y ‰6+AP6 DQB,){%<>|tk=5ʻ?6ј4d=DA~8?8q=v,Ѱ,Kĩ!1KeP(/q2:/كh񱼿}u 3:7 s\ UyY`NlW˻jWҎ i%|[G⡲aF)c1IqKmrX,Lέs " 2 ܲtL($w7HҜ⁄Rmg安BOԜ)Q{RYߋ 듹Y U^0867CeΤdօSV $T-o19Y}s wå#B=̖Z}w{b-ס !rUt-ш TS3RIȇߟ,\k>XR|``n6&10ShVHܔ9t11r4V, Rs&DIk9ԃj-#$GЭlþbM郋Wk9# o-a&^ a [捵2NXyK$/jΩxLH\5W xsQ^\?F@TO[а##h{,{e?=eÌ"J F˜yaw8+6NI SQQ(|%3ڦ66UaaA =9?.]An/Sr'Xu/.N|go.śUo|?qY}ynqXu> -t^G|f@Vx[c{=O6GSϔ}>@F.vK #:(_-r=S,a{>o,%e_]#+!ns"zP)-1Du5$\J6>A+f0 .JTdY%'-T)sIKIY)qJತbU04-<5{U 5+A ^1˥%sHZX zQ)Ke91vĺDyBXŦ` h']1Vi& ?PGE!vhcK-LQT"&$rT@Sو8ÕpTRSF9.\sM 1$ -5Bs$%B;s`+B-a#2[J+sj (jkx)f`1 ˍYnX `K Ω cKV!-1 -GI]jB奐 l$8iVphW;nz&m7G#!rFN~!H=Xٶ4rS; V17% _zKv%9EKuaVԸՐÎ:(}Gכ $x n}v߭3NUI|:ѺUn. r VwfI}Yhݔ_dօWp MrWa%_GΖvJ`P/n/s̜k̳88p"S-39 s&NәWILUnV sNa SC~bx cɺ=$SIIAhb*NPe)&DRXʀɔ?r%T&붱 2 8xFāENx\K6b-Bae΋<'Ns Ͽ!,*A¾T#%#.m(sεѰy[PCSfwWwnf{ʘ閭_yG,񝒽0TV%%@>ƨ!Ú Q`?t#ҔK؉xag|*Jbc{2qۃUŨQo54DaNE%˧{W~1Z*]G j o] H[/3ʥ&MNǛPw.\\|}[C\. VVg:*&_O2pꥸa*M|B) ZT)?NLv.KK F9Z*YZ q M#nCmxcL0٭XߥMSLbBmo-Ki[Vg(x?%5F4vy󹺆.d@* WWWm,# `|6;W*UbS2NnGd}_}Rrߡͫܯb4ՠSd܌dXJ,ݮua~X_79N"e -U.{K#Ѥ1E@vh -F6ܵTaܛuKhukc!4Uu?n 6XᮿNصuKhukc!4!ޮuBb3tRۨc݆;:YZ6F ǔ^>Qj/}@[^__; ޝiHF5 <й_]Tp_;t IpWjscv?+w QXC!`%#[ :GAAM@&5QO=JEU%,:MJ6:1Yf-lF6 <01$nظ1tET ؙ6@8 VjDZ3Y-JU=j)A 9`q7s<I!L9L~QI-X%iG5*!yI=(^Ԯ|~!$JrE!_ _)e).Ty:2S.`>s UƽyZ üa() WF&6F ?׍\1Ndu"=A#J[UgxEWe^2{* f!LT U 9єzn$ӾV4/R _yL~^K[m|75^ׯ@zx! 
JViFk?{ٍ)/F0]L03>l#uu=;7 }SRU Iۧ%#ER"EReqZe=uZdiӶldA`R&!%Qsb !4:` C'p|; y@Kg$:9GCfOҞ+UNO\d4 텓BN{]&skF5&=4wZK-V~Ժ G鉰iJΪg= t~_|n3 `7.~UNߕߗ<\\oً3VB8I?߹?>?;fE0k[92xayGDB-*L#+M2i2d"YrY: ,S#Bceer<' RȱErJ.mˇvWZBݵ 4sc%D,H䆊MDJQhtRP^)R:f&17I.`hE^F>Ȋh1zm*!$4 KwM6pR8t'ysgN7(}7|NI"60)@8z]$kUYhۣݡ{Y]k(Dpvgd$KWNh6jfBVXM6nQS[%0mi]JKT7gLgl 7<{${::>O '𷎎ѱJCS)ic[S+V6Uba'jIJz,b5 1z,4͘<&{Jn~Y4gW\f6f!2J,6nHcq1}Vf,"J _*FuJ4Ǯ&n{5k`ƈr]xP /u\=Ͳ1۫73җ7Bj[eFJ~O@'A< 5C<5%Ѕ!^EVӣﺒ]X׷?x"=irV#Vk7*@ѥ 0$|+S"J1Po(1wg7lҟ`F ؃eO#mUmFھ:{'8߆`h81܈VXzT-"%vgAEٰPtcEWD0JW=_ ^ٓ2.D&IJN3鄩D%(E7*a% 5V&!!ScGi(DHd2S ]+>JAB$^vzMNi3U2 d*Jql:^h+p\v`a9wfe@ULqKhh^vGkgQjGIV%w(mc&,l#".;j2:2—vт4݈L 7Qh9؞O l9ʮ|R#C6tuE3O%}RJȉ?*(/݉6IZ<N|Z7rcPŠ9^8*6uގAN7zsÂ<ܥߟu)vL7Sҋ3B::\߯*'g+5yq7ԇ#<<[B.R=_ɚZqV+fL8O(i"mlWä)ow%cRzJe!$IL*)m̧JqC'T)nkQ\'*JZqw-"ϲn8!}4R䳏!z%ADNFJIdd֠Pc%gB} B;MS> QTARp0!܌ ԃ(V"i~[ӕΛe MctI?捘50 Ke[BvZYIa>%=q:4d#Z |F؎WNbZ#>[[HwPNy;SwӜ]6P≛nwRnV%Vα ^d lTʁ0MP6 Rv߬'\2S[إqi1^wF2= $>qߞ[I41& =Tj{QB.C[Q-Kue<9ݕ eBͷFp,{ܗVɢcҚ1wUd =DSvgav&yL [b0-Š#O\#.^ 8F⯟o-cYvlŵ?,Orq>w{7?]c_vϚgrlECiwrn׳sq;qQ9Nx)텊wt vyqrj1LW W/BȮ C )LC}:l H=Xg'h Ⱦ8G*dr Q  +BҠVI!#phc?;,7'o(eRJ|K2c(A6+xTV Gl QDGWa?n 6IGqsۦB>HcmoVAaiQ^%#T)9P-)l<ҰԷqm3I%0&A%bbG:icR|t<vV` blƲh$@EOA3 ؔKdlZmp14tbgl5EA ~e51M#Lw{n<R1@H9߯Sܼ5R8F }#;>>׷fX%,igHѽhlfLE G8wR LQGdJqLfфEѬ l~A z R*lR܏e7(#i+_ x0l˿nRi?yr40W(\W7Pu2c.>+?9>\|\m[BrlYO}0e%l4;r<%}}2x 3W"•G}8{DKS؝`"#h'k#5Pz6}C͍)r{&'y2,\n! 
0A)Zg2}&P$aןh.Uva >ZŊ9)(@͹%>֓"E`-u/J݉颧P+43yS<9LߴL'eFx7J&:oAJW6KR<3Rj3wo6}0&eߔ"0lӲIN&9mo'E` Z]dbjUYF R M d|F4 hXGe TԹnXdҕ$X$i +Bb/Fؐi>k+Zx'JAQ!v QC:P\S {`FT˘4X+Kká1_~dj"id6I}؛옙,E IJ,}֮[CE+`3Rb [Hv}м90M!/ /IR QaM,OHすMϿ,~t k4o&iУjlX3 $DLX=d3ԁ*|CPQ 65V+s>7N@J#`*(sdBd혨2ߎQ-\\:qu{V7>dymL"zQ1 : t(d腣r3Ε.LY+ 1.py 'UGat~.)4`oД܇Omz)ZTyV ˃yW/kppGi)("6{/kݡ6SJV_=Wɪrx[Cg\XrmT @5X Jj[ڥ4ރ1{WZ^ZnzY5]#W(t $^AֱHnpg1|CO5Gֽ{yJUN}iz؅7!՛~amv_+V,o:/F+T[o$gJ.>u-zw~)r/U)5;HITU; Y/zTr|<[U>ܒHXrg]]ٹRh4}hohٸ8Q^PsV_~8iXqG7hR@F(:2dPU E^ё'_m!Ҷ)Y!@E٠ե$gA)ݔ$G 6.Iza(U_QMTk?9 ?}k52U3FU摫/4[!B+%}L1&;_6{r^ 09{Rœ.tA;J4dN@L]4X|73= ;fٹnf9*!'-Q;8kay8ww&͖TaeR H`U--&E0\cE#!2)!T[<038t>e:ii/upW{m;D8-a ]׭lɠ}BuhѧW*k즰ND T󉋬l Y"(.|ħHiwP<<ؐ 5>d$R dPF 5tp`3hIcleI'zzU #txȌ:#kFa.^htɹ>edz6I\ |SzUW*i/./;_.+?_ qZHiӶt< 20/<[*oK^BJ1E*RVˈg]N$ 9SW%kWVꗸ,mtg]9 Fs6* ZTl2!&LQѽ1v:+Oޜ_/o|q~XgXݭ]l#Eqopר,NtHX/wmmKl /#)ڱˊ}*{U $(@$."H$zgGeⲎCeZ]<Ѹ.^}威=D=oo")h at&7c1h{܅cҧ'0_bvak.. Zld^+LU>p`5(pԚ!Qoyu Lxㅡ0%| ^KgddX R$ՁgP_Bc|ew[|Pb}Z]./!`vTTzѓ{/Dh؏!#/o.`}gÍ+!C{nB\pvX`8>"J%6k`bno$ qIQȯi2p(۽(j>ܲNo*o~.6H\ @3ᢅ bMq=<ӼͼaI[ H{{$C=ieþZнfNI,.*2Q/:@`Rmń 0i"Υ>4\Wfƈҕo(f7m(RºT^uɂ'///"jE&-٠0d$\z ^YNu`CHf ~30Dǐ5}Ыͤqv8Hv\.9$e^ER,Rdl6~P*buW+IQIޏ+9Z:IF;ːqYRB%Cc&KQIԠ] 8xX=_U_J2Tb:dQ*[I6zTJ-EPf0CRʡӣq@lMwg0 &Ѷ!%nZL:t$_ՙè\Us0פa9a{9EcX*3?[cB}\^>݇H}B'FȳB%YM.#BfIBKI[$;уE4%Ǟaխ\f":_=i9 XCMȦtIREc~E6K`:/:&?'&?\?Lg46qJׯi۷,P m^2_9l,8sZ4Kq u=% c;9M?<՞Tֻ2Nկ y*ZFG(Ⱥ 1XQźnpJi̺hQVuu+A)]bSFtc֭tEZ*4䙫:%1kFܾ{\)ķ3ꃉz7Ѩ?y9IkfE|.&M(vҽ-٬eV)9ZUϔZ ږ=wW23m+@{s|59wtC_'B=2D?'q.4Al*5z8&rZ?{»^[l=tozPK&kٻՀBuPua4;3&He^q~ʩU@!*FZ*M5<5äb[OmbEպ܎<@a\~Qx:'#$Jamk"$Sswha)Õv L^i<2k^3n 2I-3NSv4Vs.$N`lͧ%Sz)m:P6a Eb.qE:9a9T}Q+i#1RTJrw()( F44W Zks=O&u /[C)DjShgAFXNZPcjUj^ Z2>(eKMze#w;1m{rxd5IBʼX}x2.,@14yes_h_gK$y 'SG9~|0Υ@; N?d}SygY"/][RBeN\枟tԦ%0oыizIXB2_{[y@@%άKNme/kJ#xI?XFev춓Be$xYoZ4~+ zt9YʞT.]x0nY4h;8;ΗyQlւr\$kw>cA I1οI%HM2O ǘcp;<'KЇYMXef;TnEv*>W5xD&wY/ɎJk@ox$[SNQ1Iox$ѭhUt󌽪Y)SÚEoX'552ޯϱ>m2$H]a-򛑗ڶ㔍3Ps;()ΊN;Ԏ@vgDe^،i h<-oJ׷zu{N۲$asD!|_܇)yǃS/Sqe@-6wټ 0.Ə?jVWl1>jk͋uPijdky֌*6E3R6Q4<=㤹: f(r~njR6gg ڼT 
L2&3P^2NE޽œ;sj /6}b|j,pQ;B `r܈KJy]NK:|8a8s02F%1Y:KPv7nt TKo?egɗ[p+rnv~W_nm5'_z?'K&dUbXƩ\#&RTK* ԙo !(),;bXaÍ+&.gTK$RMd>t6h߾y+M Z;m{뽙d޼l%qő5?8b jL,МyÈ LR"d<hyȧHzѡc5'TFjx?͵fJSV91X)*ar\_ZeSu_3%&_R~i߫$48J|]R =+Xu$';?f䙍3*䣜gK(:gr밐Z࣓tPD#3.E BЋI;ŎLZ.L]ͯtPޖAq_ L|!m=Fv[%Xa߼ >BKJp:QuIyp¶Sڡ5s['$[ÓwZ|G\ePHKCq`:qQ[⪍J,k ZA4Gg=zg/ogeЮ7M9G> ,k/y[ÛW__u|wz~o m҉ͤ۹FY}-QmW\VZk0km.,ڪm%8W0h,@6Ϩ+x*:6q4>!4c I>:{t{aA3{jekJC.^]m?dZ% !F¥Sy]~j VtnP]ǀ兓3 ܡ{8 y?P^I2-,B3Jf t,J+7@6s"ǥZfqw;`UxLD̳B"}.*]g4GEY A5A$L(I0i6X,e(,u2}^\Dg{-eyL)Rր3 FgY\f8;SRF`pH&t3t ׈ov-x1SUwr`ΪdoǝkKwf^ eXY&HY! FWb٢#s8<1fh5X|O ^7tĨؚJ*be[҈'Fp[~CXQ!~ּң>@qzvy~Vɓuܺ@4v;@uwHb#h ȫKP{vYv#5`k`wA5:h\7mWW7lMHo-&Z|I喥u!7"[-;_W]P ɶ.Kjzg4=0$@;tdd^3TӆGXK. 9 20sF҅g>40”UmIu` VGٮC \5Aq9ULefhWxPdjQETLn-.5if7-lq tE䳭Av׍V?^ۻZyUp%[P(Dut8DZ͌$?42^Ov\ 2˗=^Y|V@[n܇M#x7koXxDIo\߳!7Y~9pcUBPc{!pr3+M u É 8&;oG>y(Vg0HCM r?6ZѽD,<*~UVyTA G Ku%ͩ~a?I^3n*ŏ&{E*;O(+UƓh:OB`Ut;u ]Ŕs!yw6LzϮ  ab3 $v]JƸbPJpD hhP<Ä ժoA(2>(Y)Ƀ|з9ijy Xxrm=yMǓ($\Xd"C7\Dn|g7& Ngg>@q - 0{E'{c7f‰T긎Aƹthu5rt@n1qyjSygY"/]ᛣb|sEDL6qm~ׁ{k|mv'lGc~E@B`:/:Z5؇fi Ws'i/NߺQ0WcO"ؼQܖ8wƝ8fx h('pm2(+A ؅Dؤ~+ KkOGF|d206Y!ށqb6fk{2RK<ǚD+hy6>;g8J'58Vڋ:gf'w?]{ z7VO ^ӷ:??ki/|jxW7>޼z7o~˷Izs{Ugh^4)cgo~_n/ُ/޼QB$>3M{)zdqU/GG/no`p=ݾ''}@6SA$,$!%rz=WD{LJ$p:Ϭwo6tqiOh"evg$}T1 l-fƟ{g(܊'g`!#7T٤/J•؏TXթ:A\_ ;mfr66X(yJ+h]9OνťW~ٻ4W(>w=r/TRڱ+v6*Jخ==b@\f`# h<ܦ9U?[5 ɗƳ0rpH>u T?"UW,ij//O@|U|y gp՝>߿9P4U7H(яf`0#c70(\ >V99aG~Cg.cR|) >qRpܲ{>}7LG0&߿_ 3)׻L΂]»1'7>/qqoŭ2ܧ"Si%3r櫔RNi:CkF?;Iq= pp|"Pn0 o< $@1rcA9VX e9D5kwYW#i~VNB%"xSja@WAJ,Xnꀙ heO J6cnZ =hƮ;y V2~#%*Oef.3cQI]ovRӽ/jօ\.h]U;6Jty/Sn*u9 gkʇZޅq"axe~UGy.`3 ,nv5) 5h`ؒ8Xy~Myo鯇[yח緙_|@K}Ii/qwЧݳɖzwObo߿u߿xn~zZ_negg>>K:P?MO߂2HgfؿQwK;7y[X%??KNBE>OoUմA*w%cR *98xpYQw[i~[pZ5x0v9&թ\-'X9ADj) ݁j5jiu3EaZK_݄"Ƃ6qhKMFC\cLjμэxZomBjZE!mmFp HsecHLLPL3U ܽީzŋׁk*7չ[UkW 3MLgկ#PN<ױXEx,," $E5{ +F|Ko`M/oI_:o)E1^wʼn&nn'^(\阎q@,{l%BTB". )!uR4l5h#"մ$#}:+%Lb|{VS|*fw@1e]5Y"@_>/)4q(CILJbPRLZIt% ÎaSMP7)Rk,6BX)4&X/uZHvq5~1H)YgIU,땂lSuAs wy~ޮ];Ⅾ;r\9a"X\ l4YbfS$i ԙ&FiFx@,eJrO#ʔ+Q֝y5&M=>e86.y _^vƷFT'wW]? 
C&IO:2ַ=AFKLz~iJ/+y .ڝ>im?N5X$Cy4p~~WP0jQBPL Y<[9\iJ|Ёyhi$voud.TC3V6v/&]F(D /zakp3fGg2[dVm%!\n"&kwxIcifxABЩ^RM*l~;OSf5 _a5 wVRjpViK%"ZhJ\@FenV* r6Q[+o,JTe {H VjKSypL")1"{320& ОO-HCK1|9L6[28A8&8-(+}`Ya8] L={'nW&(u S\;uqqX98w?a 'TbcԚ88TɠH1Xm6T`! 2ʝjC4O:VR0cʶ++2b3-sIo^҆}և0"B$9Xi"H9. A&IaU8YVf'˚1z#T3_虄m,$IKWxL{?0J$~ܷugaGðf$kF j1. PC+x*T(ߏ5a|pLjT02ws[܏v;hoɑJ*| G w#S)9[+&5v] Fjm֥x \>[Gh+NHk"*L6Z?G v)@ % 9Ze!,EإF;+Y^R*ΔuNlSA!s{b=A\iEkEz3NRàkќL&H3*( ,,c2';.%1=PLZ 445x _8ˌ "(Av`@id~2f,LUdyR*C 9nI]5LQ S.9[0_v^'H0B0.Jep+!17 T)xTֻ]:':}e_=3~]~/ST h=5Zc`,gK?kǖ1QeU0 xXզc;Z5Z Ř!Es.[]RX!'föbU< Vm^0=@)j/ YI _]tS\9LO{ss6_xk@*on&_лS񒱧r޵qdBe6@@fc Ĉ7X4$$3HjMM6#8:K[a2٤cťOTr`'suO6lm;Yys[.RSy{NV={t6Vq ^bֽi=(xɾc랂I>*\q>d(,|>iEվf4>=o&IM|5,a9s3pe9Y)07ȡBp(َt,pc:NuR%ECY^h0;qh0;Eh@jˎ[jO`J?ee nX;RC_RQ-,>}w+]sc(R(;.DiMEASOen%l3?\];#|2Zs1aqvREx*„uF"BH9gj:+_YƭQM ]|˓4;nM+U?05RE%09Y`tELS`=dG eWU6 }*7(C=%v=!$'$J[-%lFIM#^R0NJc\qTG`ZR&tYӄ*ڵ+~z? -$WvD{X#0)`R'^>1#[5HKHaҞ x<ܗ=uhVv۬. ulP{{l5] ڲU]dATz +{IIA`ihHE dYpz)vipD!#TnOi8//Eྜ7T/?>Y:l$Gk sZXg*Ƿ{jA.iž g ru= n ʒvR"4hkl:7ٝ˧JCy.Ci(ο%mus2D욣͌3G[4<%P2YX2f{ƖϜiO[1A[oJ:U@[M[87%&іO[:CABȹ%u DYe¨pȢdc0W'uCisi=2[IZAT'22Mh4{miuM>+ړ[eS,IKczenu;[#[y^~AOSw{7*0V%ssY iqRBOmću^f1Sq]? rr V$&-fF-ehƋ҄xqL+Q:q*OfL јnRjc\PƟrV/RҺre6=N9p`/yH_K;&S&tR(]tsd~LJ[3ho[EBM8I4*bV_fbt p)VsN"ү"ɒq>eg2VgLA2Hvbw:*~s,X٤9HJD_RF ^t"9U9P]r:zԑXJG诪>!+BU?@:XљIzm9=R{ዮ: #?IQ0)M/Q(܏jK xؗˌ8L%2R9^,^`9e}KݜHDs Μ\IYr=O? 1;JQJƎR2vTNƖga NJAQ4%MY$xa?cʑhA!ǤYMj4 K6i2i.̧RLwh 0J507Z୍hyLs~\GWZun紞d n%56Kaˉskje'*`󳖧zs"JhԮAϫj zV%X>Lfp*D%/ MHip>_%HB5SXGJ9aYic_Xުvu5y;TFZR`BT->04U Νzm)s☫NTtȕLDbݥZG)wog.8xuz7^T'W{?֨`9߮.6\u}n2 }<8r=^hf .T,3p$%}52fM AZXbm'"b>=3m}Ωo0+,Ӗ!#hf!/=%Rc\*. 
$#_OVɧǢ^=v#M/X9gOү4!6wM P1{^\,Mn FYbAungF`~ @cs5f|u7 lv܇_>I2w6{;w{v ;LP~䬂8`/z8{^Pƌ802L ac$DY6vK8|vSpUO$RR~FNHiPr# ⾄{=ċ|a]xֺtoUɃ?bС=g0וm?O5BE3gvmnb?9\j{At<IAfvrRٽk[ +g;}oE\-U+}Įe_gPـ G~k;5\wky10@]X wmVB5tͯdPn?dW= /{d]d .Ǔg*Pv|_'pTixmgճGfrGkn6?v|1oN=zX$LXfy6;Na )_ϘϏήDžO'UloR:3\6 O> /khS+1uT%D OPP4Zń]\5kLlA;E_~xsrv0:=b|EK!hKڸ/isuOUf%U<c\xɼq EؚxXlw^d+R\;ih~IbR o ;:]0eׅ1C,Ao>u<7PT[Y(h%-:7>,G$|EmR؅dfӁ~n.'MuF4d)N!́9rMS8hp) 6] ;@?w=D( '>F/E?; HU@ZE#A!چcqm P{c3d!FV0&* ΍'TjmF+DB !iBHԞ d!e2z?K& )} Nax/w ~qp66pJ( MO|(iP USCE`؁|'#$[@;2\x !yM̼hkv)y]XWMӍ |tQW0Y[NM*۳CQ޵` 2ʂ q=V|x᷎޼T4mE k Z By!W܂I jC]?:$6͐ ^3Σס:UZWϯ[7Qգl(Z7"\IJBxAy(,"# ^j^@U) { c$έcSxɶr\q0`ܚJKot+.1#-#+=`Yw@0DxOR!" ?[brW-Wd n$\z.`- ͅM5:$ X,C^j G3<Ą])kwHC=s5oL Jo&﫲P qXY%w{7*`0Z9J4˚|x@3h 0l|9$oӳy#pOtK^q)cO/Z y)5ߠ{WW<ӛ+?ϗ:^/K7X/u<]l x))(KfyE[d379cٻ7%;}>\lV~w# S=䒳p$W!ȫ尧WS sX,u0BX5u0E3YM²cB{ WnDzuҀU3KIT׭r>npOi "U.բ cRʌ?_\p6D㋤'RFRFRǶLrv6\%ׂ#%s*20FǹĺV! R\ )1HZhs[6EO kI[)Bg/1X"ԆZIB;rd ,F B ukBi'I5HW>SӭV׉aWKqcb PBH0!:,kfE췏Ac:sdwĝNkk}<'e[G>3n1aYfEȧe* {ݠK1f#U?AuwGbv?٨f9޼-ٺ=o$duG֔sŀݑ d 1PsJcG_|[HolLc[<$-J$Biׅ>W eޱ"0O"őw&8#,jvzPg\S+RgjI\EL5=Q*Ok|syr(/8•7j%X^g !*..aU @>q4#RvyY.(L(^ʞE :yi~ ac$BrPh(Ajw3 dǣz«I7KR^"S5.~Ixy%Cq=yXf̅Ev4v:q|8L,=gow֝XMe4dS=ᗻW\ZrӃ3r4:aUkBR>B^n=L".TkEaF Gg8 RNvöSUȁW?K}*(t ƂBmPym+_jؐ匤i!.ݻAhP.+0"8AZ] m1VK(SIo!p,)8~qH /0_yx/wyHK"V-]&h{)jm6}Ɉ`ZqPZF89i8| * tٚg Tց"& l#0X"홓 D;yjKI|Z*YJm,O$kɡpjzg)iimϱf:OWdbWyV j̑i;,P첶BWn~jP(JZsȯ3PK uQmh: ENr0[ŐrvT"7ϐ %P۰,3niϘs_3X]xeOWL;GCàW8Ġ_'D첀EB\\NeXD܊PXݺLfߍsK',8ʋ]O1{Z'_}zqAoU05fɷ>*Lk%niתɣYt/j m_sO1J\wC~,چXJ%Zi>k4֗OL-'A4]񺹠2NB,)ˆnӛx`Θ1<7L(Hx: 7SJPIu;~Zk$N>8) 0f#+̕0/vxk%=2$7nDŘ HEp HkI׋O:^\| UAKFPAqܽod xqo!E:FEM`g6L`1sqvTğboئhoD#c&R!`ڜ3bV S@8=,loAy\.ÛEÎq~ÏB2 AH y@Yŕ磃PTnږx5rx1xֿ>C,Ybk@)itIq~;B3"s 3$"&%GM3>RU.'qvs3&/Ksy̬4=.Վ! 
X  |)hc=)[KZt6RZHK\k'1,JО*֩D:5=F Z ( Wvʘ"3 !"$Rādw!mGKR| SFn-R MyV4؜ue ː=Idnn[ڰ l-}[\ݰ g-};`1ds0c`m1LY%ȅR:\KӰP 4&Z\c I7.;co6;)uxu?|Oglg|1/!77t{˫4 oao;G@Z1i>ܯ9~}6Y.'"Ͻ+]^Bi*-9}U2Ӫ;b .i_ɹ5R m$l/w3T|Mc9"jYY+ή,^u- h36& ME7= xσQwE;'Ch$}\ k&}[׫ʹ{P޾yS^w:ߑ6 17OOIj͹{b|L4%tZ[薭Xr_@,`2c)9uUTРdk+MyͤϷa[Ƿa-`#^{ ,Ģ_K*Kb4)3K`9俈wm=nJ b[%@ػX i&ik)'%%uIvb!=SůdU.- ٟ1潺Ml4 *jEr5\ Tbw1LIY1oB"kl%@U _Aԡv ֭a^J*M+k_!%)*d4A2j 1xkAۂhI~^&P2&`P\*#)c(KVb7,)רn+eL" 9QX‰ۋeZX8Z^i(R+_`Z;)eDkeqe$pHW{trƵ@)F:wù5ηC`  o; )C)y<&0,Dh; g/~u<<zsVvⲁC1dk<+ET yb|Z/gF9| n,polZ^} ~s,d9_|&XJlJvV0PǤ=6!,m  V!N^-ή.E!Q©--IyI!Ժ b1vv&TUS zfڊcY '1b(zjU5:U:WN1c#F)%"FURN*9qgƩrUzToĨ/j X&T!@u*%/ė뫜Ma3o_cVJ +GZ`J)Μ4B *C*H*2!d@Rj`_J)|fK6Ρhj*N<A '. p.yq%Qr5-S ֢1dQ}#E bA,'˩b?d( J\tQri ]PVBc^)||LWeQʉ%V*'B#]I'&iV,d,V.ig| Qࠁ+7T;+GZp\u Oakg In'O60%:ZUlv؊ԩ N1$_eAfzrJ2->fUR#x*g*Χ5nϟ8 1f)a9x|HS /WmG=|4lr[2Rʵ(gD6*KhXe y {=8ZNqsY¶w3w1-Sq>CC߫[rXxk72d`;gQ¨}QI7FјmM@i')F"U @td,''s>>xʾ@1O*;-:y9"V@r8_HD3 qDccL`(PJAq,5-Q)I 옰SMVegS}b I\igC ߂ ( sTݼ;ՄIÈ`;Zxէm`䇵?n,o^"vº看5ьk,xb?>mkQ/SVLJMѷ!?ÌZ}HF%/(EaO.RkDqCHn;3 q!D~@G C˴='kɣXRq{g؂lYON–_wqL)=2ΰ$٧SF1V֨ΰ?(;N 1Ҏ3Lr4s=^H0ّ` )8˯u8M W^H@k2gN#űy%1`mp2) jYs g2+08-s7Òvf]fw?zʽv mN>lwv9>xV14b sc P՗*R AEr#Àt`VOR>)s_᮲ ND(E5S1o*[D>㗟=Qk* ݩvBbN =YNW?|z{%8 iTE -JL!2@ep-)䉛*c15TQ8vA0$^ ] HggqO@w QaNE`1%`?1բB7ڨUiUwş>̮ήrQKRwwڬ*?|#kxb17םϷw?_ľcKwz#+tM):Ӟ[[#ouyvW;0pNպ)߶Qՠ>9ۊg$(p1$b1w݆ <->!pwK!!߸FȔQ<$ b":h %/n=@HCx[XH 79D$=%BTP޴T))81vC@~p 9?3,q -Gw83;wgOn.˳P߿iy{J *{{wsE0 8aZ0Cj! 
L[վVp8fF2.0K dV[ĩ)eLi%\WeY, NI2y%8$hFpHD%+IebSZ\2*7f@)ܷ N('4a@qBnIQRZ!K-0味Kc۵T a *mZk4h,**%46nh ",b^iX<,f󯾜TB=tGI+Kޣ=*)@HD;mm}_TQMd:UtvK_E$JªUtᖞ) wBv٬fe ?OzfwjqA>wn'ԒN7-VnVceGD]beĥHqAE'xAT$+O1Rai@Ȅq@Zf)HBJw$m, ;J Xt)V2ilrsv7}_b[mѳ4M{ldؙSq繮ŖXJJVh+u 1l@NvkI앃2Aw' .h亯DvFvxڥ&ُ^l<~ѻ5֮pK 3䒲lVs5cOχ:הH:^\"3~Cnlvm4ü 6AhNN|c͖<&dobodѽ<sJZ2 .j_9Uwzgk^@pw5eD{yu|Й"xL6["%GjP$+8E3!+JE_o DN֕ ++יj&8k11j!:v82b%cR) 9Pm@DUBb Ge%d%E)0Jo[p$hsged{K?KJcuBCJ@&m2G$M_)C!!)Y}~Zhl➧p%Β}D/Ih'"aWAaFY#߭E$smötM0{hnB2MĪ%^Nd?}cȮW}MHOXԣ)jZ6_,_c66A;Afݛn&^,]uN t)ɡtYmZ V=j2]j靄{^lRY]u!p^nRdNTа7m{% ,ѧ?}!/1jeء`I:(図A->|FT zvxtba>~AƔd,ɡ0mjvGy:5ڗ.Rn_B<,_ =] 46睟8|QQ~9>X޹%cf Vn4R͏^Op.v~ [=?R{U8P?y2*"yf{Fi UZrUTbeU( \ X4 nب1&p:BJdbk; g vBA<g4{yA֜XpNAlϐudBxOB7k+f~_<7;q3!~3Π-s*7 [ڣi.f[EbNW c˵n'XlFd%yBR~28|l)jBXi"$E޵qB%e}ȑA+,+yQ@m.7$׶`.9p.KjDsYw.ݿ kg+`_m)5,QRcB I [`ł&SncM6te4fb \խz.v:"g'փYØjVrs3Q_"!TKp2PopQi>/~wם˅<'lfS7w{e(xDNz`Ԇ&0nFu1Ӌ:|6ֺrd*IZq(`FcmN"\H R{N9|N ɺ5젚*<-SԄ}^#GY~o̗,$` O;+oqy_N*© #0X6M+ bŚAX3k57pSNHiˍ D3M ġqNSi9ƂX@ňK|LqN& W͢z@P{i'KI4! $Ot.u8 }CmgX䏖ZHeWa_йw (Ϛs5 jAԜKG8(+ U)8Eژ۰̄-B 2VVhɕh3J+USH%"RUMa*A1kDtCBV)YU܀LL:+<蠄 Lr4("A)لzH"y,s pqE! ;pB5'H F'I(L(^Q),Aj?=F&V:Mަ+1+12h,?~XTPܾ}SNW`Ǜ.ADͧ%:~~ӅM69F#6~P 4;7X Tw;#30|NR Ř.9?==<Ķ? >N4gu3}ї.RJ(6d” W%]58vATAĴ%lyAgK( -T0D` jLoT$.! 
ąKU)@* 3^J uAQG Bg % l2 `%B3Yw[ ø0Ekik o F}2X>Fڕ go>ܠ*M37460)'3,EKuKlL6xp0`p0P`X&\l'((ej1aT߮SV+{د lYY11b]̩8D3_F, 2_s7{); ߦN򛉙EY_Im'Iߣ)n @fSMI` Yo,=Y3iuQO{45k̾[}W{YVc+2:iA)I;z+3a_O"qI1pcU$>}Ŋ}^{u(}mu8E\"p='RW&>u:Ņ.ڹ;#re8$UFA{~:kSG_rXvrD[28Vf/e]jq6j(]3&C@05u㖂S#X>B{j_LkJ0 0i CLׂ:gWEVV '_S%gWA't~NKi0Z?( t$~re$*sJD*\/i>g'`16 ͇Ot: 9ٱ\mC;9fMU@x(u2JK..tQ2!R{l_\k,kdq\:5obÊ6))'TYQQQyA7l719`GĘF)q0/gJ`iUTܠGK|~)N؃XWs4Wgi5;dWRIn&p%6ƭRjEP*j0n2(g:3w3I 3]<Ω~~6eRMHt>0> Y49U1)t}O1Ke:g’p,5fWH,ńXd%,Q}N5$3O/dP'T ŽRd"$: Vqc9 8'l$ŽA{VcZ\˴RT; 'eZ4[ʅNPĎ9aK,A3Ci:֬`;ð$4>o7_4YkV8#} YDr^r 䋔f(]Tnj~R_Qe)\=PP*XIV3T$A;X`( Z2& gd؆ﺝxxi )|q,mԁTO(%L-uNf[t7\((Z p}4 ͨRO9/@c.3`DNG:հ9kC,cr.Ijp jhMr-}{mh5@Q!&>Z6u0T2cTEM # h)&qb")zz42DEԤֽɰ7$D*L` ` ~WrMaރRiO:`!fNF zz _Z_G#sNij`.ML)LFL>MlwO'ͨ3s>ccԑxn 1i ZNNĭe^n+O: rيx+R˳ I͂XSWATT:)G*k푂4)X^CQ / /a0,Ap2c}Җ0M$耝sZ\58>2<_UR:Ds+cWb)>G+rɊ=V4ڇIh&mJi,^@Kޜ^Dg=p' /&B7]=mgK&kK&zxWt쫟P LWK%! SJ>,?S8M׳}+κm1٦zf*3#{uFn'89q SĖ`޳>Ň^nӋ:HԽîpUX(/0M? u2#O>1_Z-0(\yia,ezvzח0˱2XIB^V)zlqq RݪbPDtQF\px*nMUhQVr=Z$tip`K?Xugj??On;<; g_Sa|g?wV~L_:*VD$:j5^]s&i-o1g3!kʜK>8d/w_{ _P\r iЦ18NAcLqNՎt;O]VI*1"+F ]'\^{|Rgm~l\:;`<0P:U"|EH9pCoqd rɎR/])8@1VkYZRePW`ʨfS)1 Jlrh "ojJۿF 5/^P,8)5 a*lJa0kZ|4)?)lJڣoUx\W"}MBY\9Y!~W r1y\j~sj^ `MW wyacgX$ 'Ō#W_FckF)} /}} O$@YQX19eRo KwmMs7A|DYKZ3~cMMKkT3hRrx9Fj^x ,On rZ'\i JaT^sSJ-fMZid:MELL~3 &Yyz'w#;I(7[Q0B)WgLlJh{,-?fe?ph41`k̓8k J䐡!`z?pL BFss6a)5KmBV$VĄ GּKlL6x=X*D"!PMsxA-N:RRR$S dX3`?<]Yo[9+_rNeL ]UTk,,9ۯCɱeU ug#yדmtMKQN(.s ?]6zygESK$9 :?Nw-n_v^OE^/"MK91CXk&5|P_03E8e4rXsT?2Z ͌3 և,y9 .<1eSL2}T~qz\Djۦ/ӊڥKz=unվF/߻Jo>7J񉥰~ߟ>PK,dw~irȬٯ_\8+B9鞃 _'|q(?L_⧌,Pw;S7_-y3lv>x`&k0x-OՃ1ràۘ:+uFPyAS_Fc(ʯ:Di' š .?8T-ϕ9e4W먙윝xS_ߜ?0REKAQhS_f?kרi)K[ (T:ԂN/&YzUI&s+ó,PPƘe3%& 9xj sYx!x%SJfn?B"ѓPrDf8WPYem]zs_\_4a(vߦ~a>Ӵ[WibǤmpW\^Z:i#3' %&(A: E6.z$0GQs9ׄS'K,JZoBcHATO':"x!2)45#l#^T/Ѓ_|1$}D7~M>L />=0k>.ҸU,-6z~6!-53`4\ӆhŨ{BIV3N @D%eQy)Q|җ4Lh1 P i:HzKB`dsdKI G M7ʫӚ+s1JUPRK/N94ꐳ𠵶 P[="g=PWT%b'rPe0:+/$a^Q#!4g˳(7RfV }f0c1b0{)pzYsF))CL<iy .j1דNGLì0 )NE5i>(_{JdxNPї 
U&u9jc H H=; %6;{o B^&ٔ6XcjԍR11nra[M4O>nlq[*A%M;.N1t T y&ڦ} Î|Ba'1')sA*?e+?]wX?7wc%):$-{_E5P<:%")8%Cd\$)VIJ7~^ȮѠa;SP)Ib[?: 0s( fY*T& ]5{|:&-D͖ߣ&O2\`є/qd#D^?slXEӑz}΋% 9S1C@Sz `+LJc2a'VVK= 9N撬E5s6HRZO0ЕD0FTh T)DYV9y^[Ppe'F+;-P&Ksh:$C("мyz`Ą5N "[ aiv@ @ɈH˲^wfBrV߱v"a< KB:*O "gO>CgGFjdw|L(%:&0áoa2q^WۅJPh8⍶(u(AziI|5:P/u2 Yިʅ`!1Z*—uau$> Vįzܡsd(O(s|nQeB\UV*Wg.3]U^gu)RJUYiDMPBh2v[5TmYdF2%x0u]9\Pf([/:LI`fJ@K3Dny*u9 @`Z4R DS<΅ UFZh^ ̥PEz ѹcdG3@+:أsƵ^%{߷HHiU%UIV+R uI8MIfK-N*^یIl`vFȔ V" $ %w* [=:0j M %v*Ş:Dnbfmà\E[ XzGjPDcKo;wDABs:0_uBM[8^;cZʈ'30j^-Y^Vp`rSV˨LT e>fnRgL;gQהQ'{$m`* [^d h>ocW(1BIPs4 YdZkI- PRF"6WU(-Lq ^3raqT҃ext}Z1k-{rpe&[5@#YyDT B$S#F}sa0:P y&bS(pצwc5=NwxhMHn]X+72m ox7/0~d5{J~dmzλKy#B^)mΌĎ#L_45r>w t?Ǝ{;] ;z; risӔzGڝžo#v1]/]ͷ?>Zpٳ{wCZO1B^~=ڳ6+o~$7v֏ό ˯mu7_ yϳ2sQٗB ξ:nǖ[rʽWN,˧gq0:bq1 瘂XjAA#&kk{{OmқA n5Vs{h$($Vec+u Q%yO5ju'5[ځn~(JCcF ֍Mr({HVZ^us!l30Յؐ\Qk;\T BYS#&z@N7-q\'0׺ y&dS<5:»U&R11nrMNI [M4ɦ|ĹL»bb:FEb4ޭ y&jS$lOW?5~ O6>TSrm|rm Oɵ%Fra[%^=ROP”wkk yia2(+\JYT,#E)uVEu^h.s[ >gz}V+92:3-Nܝؚϛl+,R\`q rBt"W򆴅\!y4 jzn迚 3BRWʜu͔sFd z[RYiyelEnau@n% k ʲM2+V:g(꼄Y qz(wC.~]]UkqU7Օ fnu7CPnv ޭ@I|yW|kF }g-WmHOVNc?NSzV#9b񜋕/1T|KvF,g d3RLMP%B絒lv( E܂cu™2|8~Ymmӕ w| gHɃL_jdw4)R=tr]qĤ{q$f ܾEMn9yna%WK.91 ffٿ?J/{j>G;p!ȚGˢz,v_U;bhQ|FǦQ<|@q{y= ̪sV eWMɕZ4#ZŹ̅O^$dfx~Cq ZK}$OzZeu0yG\b ܟIxTH@lq ƶ!.y^iD[Ii2˳"*)ZODp]&Vb+Y4z$B<݉Q1|mKYD&|zZI|!ukUא"QI޹.Ey|Oj)j\yA1lPj zxRi|HSлn{|zp& mXF NWկGN93J3Υ"S1 @TwȈ% `,[RwLE;Ih*it!#1Y9oܸ'9|G,EDd] b|P|ȖN?'!Ub JX4gD㕩%>\[BI߯o?WM~Lsa˛u)ܟ'+px>loO[V|'[3E=ڝ×=!v!{CǃO!vG0&20W.mY cRKPzyp ""'Y.a$tQ'>wVB>F f@s8 +7w'+=n+ ;y+E3 +EfQ >YQ[)Ax#mL+7wFT0+U[H\oG dGbR5var/home/core/zuul-output/logs/kubelet.log0000644000000000000000005740343115145356062017713 0ustar rootrootFeb 18 13:59:26 crc systemd[1]: Starting Kubernetes Kubelet... 
Feb 18 13:59:26 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c263,c871 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc 
restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc 
restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 
13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 
crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 
13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 13:59:26 crc 
restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc 
restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc 
restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc 
restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.001550 4739 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008252 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008286 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008297 4739 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008306 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008315 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008324 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008333 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008341 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008349 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008360 4739 feature_gate.go:351] 
Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008371 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008381 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008391 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008399 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008408 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008416 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008424 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008432 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008440 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008473 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008481 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008488 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008496 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 13:59:28 crc 
kubenswrapper[4739]: W0218 13:59:28.008504 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008512 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008519 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008527 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008534 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008542 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008550 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008557 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008565 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008572 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008580 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008587 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008597 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008607 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008614 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags 
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008622 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008632 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008642 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008651 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008660 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008669 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008678 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008686 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008696 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008704 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008712 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008720 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008727 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008735 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 13:59:28 
crc kubenswrapper[4739]: W0218 13:59:28.008742 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008749 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008760 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008769 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008777 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008785 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008794 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008803 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008812 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008820 4739 feature_gate.go:330] unrecognized feature gate: Example Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008829 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008837 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008846 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008854 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 
13:59:28.008861 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008871 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008879 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008886 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008895 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011234 4739 flags.go:64] FLAG: --address="0.0.0.0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011262 4739 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011277 4739 flags.go:64] FLAG: --anonymous-auth="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011288 4739 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011300 4739 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011311 4739 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011322 4739 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011333 4739 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011342 4739 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011351 4739 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011362 4739 flags.go:64] FLAG: 
--bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011371 4739 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011380 4739 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011389 4739 flags.go:64] FLAG: --cgroup-root="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011397 4739 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011406 4739 flags.go:64] FLAG: --client-ca-file="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011415 4739 flags.go:64] FLAG: --cloud-config="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011423 4739 flags.go:64] FLAG: --cloud-provider="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011432 4739 flags.go:64] FLAG: --cluster-dns="[]" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011478 4739 flags.go:64] FLAG: --cluster-domain="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011500 4739 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011519 4739 flags.go:64] FLAG: --config-dir="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011531 4739 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011541 4739 flags.go:64] FLAG: --container-log-max-files="5" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011553 4739 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011565 4739 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011574 4739 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011583 4739 flags.go:64] 
FLAG: --containerd-namespace="k8s.io" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011592 4739 flags.go:64] FLAG: --contention-profiling="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011601 4739 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011610 4739 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011619 4739 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011628 4739 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011639 4739 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011648 4739 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011656 4739 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011668 4739 flags.go:64] FLAG: --enable-load-reader="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011677 4739 flags.go:64] FLAG: --enable-server="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011686 4739 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011698 4739 flags.go:64] FLAG: --event-burst="100" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011708 4739 flags.go:64] FLAG: --event-qps="50" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011747 4739 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011757 4739 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011766 4739 flags.go:64] FLAG: --eviction-hard="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011777 4739 flags.go:64] FLAG: 
--eviction-max-pod-grace-period="0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011786 4739 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011796 4739 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011805 4739 flags.go:64] FLAG: --eviction-soft="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011814 4739 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011823 4739 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011832 4739 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011843 4739 flags.go:64] FLAG: --experimental-mounter-path="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011861 4739 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011883 4739 flags.go:64] FLAG: --fail-swap-on="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011895 4739 flags.go:64] FLAG: --feature-gates="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011911 4739 flags.go:64] FLAG: --file-check-frequency="20s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011923 4739 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011936 4739 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011947 4739 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011957 4739 flags.go:64] FLAG: --healthz-port="10248" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011966 4739 flags.go:64] FLAG: --help="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011976 4739 flags.go:64] FLAG: 
--hostname-override="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011985 4739 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011994 4739 flags.go:64] FLAG: --http-check-frequency="20s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012003 4739 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012012 4739 flags.go:64] FLAG: --image-credential-provider-config="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012020 4739 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012029 4739 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012038 4739 flags.go:64] FLAG: --image-service-endpoint="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012046 4739 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012055 4739 flags.go:64] FLAG: --kube-api-burst="100" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012064 4739 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012076 4739 flags.go:64] FLAG: --kube-api-qps="50" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012085 4739 flags.go:64] FLAG: --kube-reserved="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012094 4739 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012102 4739 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012113 4739 flags.go:64] FLAG: --kubelet-cgroups="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012123 4739 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012148 4739 
flags.go:64] FLAG: --lock-file="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012162 4739 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012173 4739 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012186 4739 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012204 4739 flags.go:64] FLAG: --log-json-split-stream="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012216 4739 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012228 4739 flags.go:64] FLAG: --log-text-split-stream="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012238 4739 flags.go:64] FLAG: --logging-format="text" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012249 4739 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012261 4739 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012273 4739 flags.go:64] FLAG: --manifest-url="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012284 4739 flags.go:64] FLAG: --manifest-url-header="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012310 4739 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012322 4739 flags.go:64] FLAG: --max-open-files="1000000" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012338 4739 flags.go:64] FLAG: --max-pods="110" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012348 4739 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012357 4739 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012366 4739 
flags.go:64] FLAG: --memory-manager-policy="None" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012375 4739 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012385 4739 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012393 4739 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012402 4739 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012422 4739 flags.go:64] FLAG: --node-status-max-images="50" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012431 4739 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012440 4739 flags.go:64] FLAG: --oom-score-adj="-999" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012484 4739 flags.go:64] FLAG: --pod-cidr="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012493 4739 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012505 4739 flags.go:64] FLAG: --pod-manifest-path="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012514 4739 flags.go:64] FLAG: --pod-max-pids="-1" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012523 4739 flags.go:64] FLAG: --pods-per-core="0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012535 4739 flags.go:64] FLAG: --port="10250" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012544 4739 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012553 4739 flags.go:64] FLAG: --provider-id="" Feb 18 13:59:28 crc kubenswrapper[4739]: 
I0218 13:59:28.012561 4739 flags.go:64] FLAG: --qos-reserved="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012570 4739 flags.go:64] FLAG: --read-only-port="10255" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012581 4739 flags.go:64] FLAG: --register-node="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012598 4739 flags.go:64] FLAG: --register-schedulable="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012618 4739 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012640 4739 flags.go:64] FLAG: --registry-burst="10" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012651 4739 flags.go:64] FLAG: --registry-qps="5" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012662 4739 flags.go:64] FLAG: --reserved-cpus="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012673 4739 flags.go:64] FLAG: --reserved-memory="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012686 4739 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012698 4739 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012708 4739 flags.go:64] FLAG: --rotate-certificates="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012719 4739 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012729 4739 flags.go:64] FLAG: --runonce="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012740 4739 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012751 4739 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012763 4739 flags.go:64] FLAG: --seccomp-default="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012773 
4739 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012784 4739 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012795 4739 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012809 4739 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012820 4739 flags.go:64] FLAG: --storage-driver-password="root" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012831 4739 flags.go:64] FLAG: --storage-driver-secure="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012841 4739 flags.go:64] FLAG: --storage-driver-table="stats" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012852 4739 flags.go:64] FLAG: --storage-driver-user="root" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012864 4739 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012875 4739 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012886 4739 flags.go:64] FLAG: --system-cgroups="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012896 4739 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012914 4739 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012925 4739 flags.go:64] FLAG: --tls-cert-file="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012935 4739 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012949 4739 flags.go:64] FLAG: --tls-min-version="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012961 4739 flags.go:64] FLAG: --tls-private-key-file="" Feb 18 13:59:28 crc kubenswrapper[4739]: 
I0218 13:59:28.012972 4739 flags.go:64] FLAG: --topology-manager-policy="none" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012991 4739 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013002 4739 flags.go:64] FLAG: --topology-manager-scope="container" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013013 4739 flags.go:64] FLAG: --v="2" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013028 4739 flags.go:64] FLAG: --version="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013042 4739 flags.go:64] FLAG: --vmodule="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013056 4739 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013069 4739 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013313 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013328 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013340 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013352 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013362 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013372 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013382 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013392 4739 feature_gate.go:330] 
unrecognized feature gate: SetEIPForNLBIngressController Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013403 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013412 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013422 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013433 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013477 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013488 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013498 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013507 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013517 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013527 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013537 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013547 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013556 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013565 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 13:59:28 crc 
kubenswrapper[4739]: W0218 13:59:28.013574 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013584 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013594 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013607 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013617 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013630 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013640 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013650 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013660 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013670 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013680 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013689 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013700 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013710 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013720 4739 
feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013730 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013740 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013750 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013763 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013773 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013781 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013789 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013797 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013806 4739 feature_gate.go:330] unrecognized feature gate: Example Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013817 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013826 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013836 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013845 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013856 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013867 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013876 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013884 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013895 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013904 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013912 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013921 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013929 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013937 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013945 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013953 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013960 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013970 4739 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013977 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013985 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013993 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.014000 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.014008 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.014015 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.014023 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.014048 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.029977 4739 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.030056 4739 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030310 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030347 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030361 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030374 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030387 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030399 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030412 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030423 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030433 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030480 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030492 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030502 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030512 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030522 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030533 4739 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030542 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030550 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030558 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030567 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030576 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030585 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030592 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030600 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030608 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030616 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030624 4739 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030631 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030641 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030652 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030662 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030673 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030688 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030699 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030711 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030719 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030729 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030737 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030746 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030754 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030763 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030772 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030780 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030788 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030798 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030808 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030817 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030825 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030835 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030844 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030852 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030860 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030871 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030883 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030893 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030903 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030913 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030924 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030934 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030944 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030954 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030964 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030976 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030987 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031030 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031041 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031051 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031062 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031072 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031081 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031091 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031100 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.031117 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031486 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031512 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031522 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031536 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031552 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031564 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031575 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031585 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031595 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031605 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031615 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031624 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031634 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031646 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031656 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031666 4739 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031675 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031687 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031696 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031733 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031744 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031753 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031766 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031779 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031792 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031803 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031813 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031825 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031836 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031846 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031856 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031867 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031876 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031886 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031895 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031905 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031915 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031924 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031934 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031945 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031955 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031966 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031976 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031986 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031995 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032005 4739 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032016 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032026 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032035 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032045 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032055 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032066 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032076 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032085 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032096 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032105 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032116 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032127 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032137 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032148 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032158 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032166 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032174 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032185 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032192 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032200 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032208 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032216 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032224 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032232 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032240 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.032252 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.037106 4739 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.053709 4739 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.053873 4739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.074346 4739 server.go:997] "Starting client certificate rotation"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.074411 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.074610 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-23 01:32:44.06229505 +0000 UTC
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.074693 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.213506 4739 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.216976 4739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.217272 4739 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.238174 4739 log.go:25] "Validated CRI v1 runtime API"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.277809 4739 log.go:25] "Validated CRI v1 image API"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.280345 4739 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.286495 4739 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-13-54-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.286536 4739 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.308648 4739 manager.go:217] Machine: {Timestamp:2026-02-18 13:59:28.305111311 +0000 UTC m=+0.800832283 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d786f2bd-7712-4d82-a689-cbffdaab4e85 BootID:90b9be3f-f663-4169-ae17-5b48d37fe9e4 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:18:7d:56 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:18:7d:56 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:37:23:03 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:82:a1:66 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:aa:7e:f5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ce:fd:d8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c6:33:01:0d:97:a1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b6:dc:09:a3:cb:e2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.309023 4739 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.309212 4739 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.310913 4739 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.311179 4739 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.311237 4739 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.311658 4739 topology_manager.go:138] "Creating topology manager with none policy" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.311678 4739 container_manager_linux.go:303] "Creating device plugin manager" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.312168 4739 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.312216 4739 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.313031 4739 state_mem.go:36] "Initialized new in-memory state store" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.313188 4739 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.317914 4739 kubelet.go:418] "Attempting to sync node with API server" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.317971 4739 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.318032 4739 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.318050 4739 kubelet.go:324] "Adding apiserver pod source" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.318065 4739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.322625 4739 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.323482 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.323487 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.323705 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.323727 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.323889 4739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.327739 4739 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.329926 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.329965 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.329979 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.329993 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330015 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330028 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330040 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330062 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330076 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330092 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330135 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330148 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.331642 4739 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.332591 4739 server.go:1280] "Started kubelet" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.333606 4739 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.333620 4739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.334202 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.334363 4739 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 18 13:59:28 crc systemd[1]: Started Kubernetes Kubelet. Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.340544 4739 server.go:460] "Adding debug handlers to kubelet server" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.342603 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.344752 4739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.345431 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:33:20.828108439 +0000 UTC Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.345811 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.346033 4739 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 
13:59:28.346084 4739 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.346268 4739 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.347420 4739 factory.go:55] Registering systemd factory Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.347475 4739 factory.go:221] Registration of the systemd container factory successfully Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.345591 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18955bfc775648fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 13:59:28.332187902 +0000 UTC m=+0.827908864,LastTimestamp:2026-02-18 13:59:28.332187902 +0000 UTC m=+0.827908864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.347922 4739 factory.go:153] Registering CRI-O factory Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.347941 4739 factory.go:221] Registration of the crio container factory successfully Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.348036 4739 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.348069 4739 factory.go:103] Registering 
Raw factory Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.348095 4739 manager.go:1196] Started watching for new ooms in manager Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.348197 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.348280 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.348876 4739 manager.go:319] Starting recovery of all containers Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.346328 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357049 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357113 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 
13:59:28.357136 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357154 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357174 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357193 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357211 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357229 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357249 4739 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357269 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357290 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357310 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357331 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357353 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357373 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357394 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357413 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357433 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357561 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357582 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357601 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357639 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357659 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357680 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357702 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357723 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357747 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357770 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357790 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357809 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357828 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357848 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357867 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357886 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357905 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357923 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357943 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357963 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357983 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358003 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358022 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358043 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358062 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358083 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358103 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358123 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358143 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358162 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358182 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358201 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358231 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358252 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358280 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358305 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358328 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358350 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358371 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358392 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358410 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358430 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358477 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358497 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358518 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358538 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358559 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358580 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358599 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358617 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358637 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" 
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358657 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358676 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358695 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358715 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358736 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358755 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358775 
4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358794 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358813 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358834 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358854 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358873 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358892 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358919 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358937 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358956 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358976 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358995 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359013 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359032 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359051 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359071 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359091 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359110 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359129 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359149 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359170 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359190 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359208 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359227 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359247 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359266 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359286 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363367 4739 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363397 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363414 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363435 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363468 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363481 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363494 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363509 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363521 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363535 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363568 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363590 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363604 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363616 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363628 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363641 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363652 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363663 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363675 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363686 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363698 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363710 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363732 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363745 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363756 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363769 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363781 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363793 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363806 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363819 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363830 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363842 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363855 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363867 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363879 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363891 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363903 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363915 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363927 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363940 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363952 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363964 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363976 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363987 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364000 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364022 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" 
seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364034 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364046 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364057 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364069 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364080 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364091 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 
13:59:28.364105 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364119 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364131 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364142 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364154 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364167 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364179 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364191 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364203 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364214 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364225 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364286 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364298 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364310 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364325 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364337 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364349 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364360 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364371 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364383 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364393 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364405 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364416 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364427 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364438 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364465 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364476 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364486 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364498 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364695 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364707 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364718 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364731 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364742 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364754 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364765 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364776 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364789 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364800 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364811 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364822 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364833 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364844 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364855 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364867 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364878 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364891 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364902 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364912 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364923 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364935 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364946 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364957 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364969 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364980 4739 reconstruct.go:97] "Volume reconstruction finished"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364988 4739 reconciler.go:26] "Reconciler: start to sync state"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.384184 4739 manager.go:324] Recovery completed
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.399126 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.400551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.400598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.400615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.401778 4739 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.401797 4739 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.401818 4739 state_mem.go:36] "Initialized new in-memory state store"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.407196 4739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.409027 4739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.409074 4739 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.409107 4739 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.409156 4739 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.409690 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.409774 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.419093 4739 policy_none.go:49] "None policy: Start"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.420421 4739 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.420498 4739 state_mem.go:35] "Initializing new in-memory state store"
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.446542 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.473110 4739 manager.go:334] "Starting Device Plugin manager"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.473209 4739 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.473227 4739 server.go:79] "Starting device plugin registration server"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.473771 4739 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.473795 4739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.474177 4739 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.474285 4739 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.474302 4739 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.483186 4739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.510066 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.510187 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.511511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.511555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.511569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.511724 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512185 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512250 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512833 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512981 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513052 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513893 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513838 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514066 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514337 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514384 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514828 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514840 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515618 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515863 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.516017 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.516089 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.516705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.516782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.516807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.517149 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.517215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.517238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.517249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.517217 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.519045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.519103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.519127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.549690 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566664 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566702 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566724 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566763 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566809 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566887 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566911 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566934 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566955 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566976 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.567017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.567038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.567087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.567109 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.574709 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.575703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.575731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.575741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.575761 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.576100 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669095 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669390 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669597 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669702 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669723 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669744 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669765 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669785 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669805 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669825 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName:
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669844 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669867 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669885 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670091 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670195 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670237 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670264 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670291 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670304 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670279 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670087 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670357 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670200 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670280 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670646 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.776264 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.778235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.778295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.778318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.778361 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.778930 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.845643 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.863173 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.871684 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.891616 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.900193 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.951677 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.029053 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-181c50411a4a02654fb2be76624f023c3fb982f5568934db55f9cb48f65482ef WatchSource:0}: Error finding container 181c50411a4a02654fb2be76624f023c3fb982f5568934db55f9cb48f65482ef: Status 404 returned error can't find the container with id 181c50411a4a02654fb2be76624f023c3fb982f5568934db55f9cb48f65482ef Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.043553 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-0ea15d579cf084726c893946b2ac4a200346b512325791c9c192e647374da277 WatchSource:0}: Error finding container 0ea15d579cf084726c893946b2ac4a200346b512325791c9c192e647374da277: Status 404 returned error can't find 
the container with id 0ea15d579cf084726c893946b2ac4a200346b512325791c9c192e647374da277 Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.046554 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ea43acf2ee2d50d21b0de9a779908635ddbd10b93b78d4200a169b41893d0e22 WatchSource:0}: Error finding container ea43acf2ee2d50d21b0de9a779908635ddbd10b93b78d4200a169b41893d0e22: Status 404 returned error can't find the container with id ea43acf2ee2d50d21b0de9a779908635ddbd10b93b78d4200a169b41893d0e22 Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.049587 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c4833d54dcb4d3996c8ce252ebab0796b3efe1a383e0cbfd77132e6dfbf0e032 WatchSource:0}: Error finding container c4833d54dcb4d3996c8ce252ebab0796b3efe1a383e0cbfd77132e6dfbf0e032: Status 404 returned error can't find the container with id c4833d54dcb4d3996c8ce252ebab0796b3efe1a383e0cbfd77132e6dfbf0e032 Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.057013 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-20ac3626e41e08d4a05e641e31454596237ebfe83aa9ce34fb19b5734377ca4e WatchSource:0}: Error finding container 20ac3626e41e08d4a05e641e31454596237ebfe83aa9ce34fb19b5734377ca4e: Status 404 returned error can't find the container with id 20ac3626e41e08d4a05e641e31454596237ebfe83aa9ce34fb19b5734377ca4e Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.138429 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: 
connect: connection refused Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.138605 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.180050 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.181334 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.181366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.181374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.181394 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.181846 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.187557 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.187648 4739 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.335665 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.346696 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:52:48.334024991 +0000 UTC Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.413683 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c4833d54dcb4d3996c8ce252ebab0796b3efe1a383e0cbfd77132e6dfbf0e032"} Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.414532 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"20ac3626e41e08d4a05e641e31454596237ebfe83aa9ce34fb19b5734377ca4e"} Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.415466 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ea43acf2ee2d50d21b0de9a779908635ddbd10b93b78d4200a169b41893d0e22"} Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.416720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0ea15d579cf084726c893946b2ac4a200346b512325791c9c192e647374da277"} Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.417599 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"181c50411a4a02654fb2be76624f023c3fb982f5568934db55f9cb48f65482ef"} Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.455731 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.455829 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.586654 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.586748 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 
13:59:29.752689 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.982643 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.984749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.984804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.984822 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.984857 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.985432 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.334241 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 13:59:30 crc kubenswrapper[4739]: E0218 13:59:30.335215 4739 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:30 
crc kubenswrapper[4739]: I0218 13:59:30.335252 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.347814 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:53:39.705498647 +0000 UTC Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.424202 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2" exitCode=0 Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.424363 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.424354 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.425888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.425943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.425962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.428264 4739 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328" 
exitCode=0 Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.428317 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.428397 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.431584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.431636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.431654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.435297 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.435356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.435405 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.438120 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a" exitCode=0 Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.438206 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.438256 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.439638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.439689 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.439714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.441315 4739 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d" exitCode=0 Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.441379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d"} Feb 18 13:59:30 crc 
kubenswrapper[4739]: I0218 13:59:30.441495 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.443855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.443899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.443917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.445400 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.448331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.448393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.448418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:31 crc kubenswrapper[4739]: W0218 13:59:31.196368 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Feb 18 13:59:31 crc kubenswrapper[4739]: E0218 13:59:31.196497 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.335178 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.348353 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:55:57.882146073 +0000 UTC
Feb 18 13:59:31 crc kubenswrapper[4739]: E0218 13:59:31.353798 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.446646 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.446638 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"734348fbaddb1f1106c5f33316276e3e4b941e731084a8379fd9bcef39a5f687"}
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.447578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.447614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.447622 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.448494 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5" exitCode=0
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.448553 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5"}
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.448630 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.450216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.450240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.450249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453229 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8"}
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453274 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5"}
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453289 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b"}
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453309 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.455210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8"}
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.455277 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.455912 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.455955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.455965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.457707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59"}
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.457732 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990"}
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.457744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e"}
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.457753 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc"}
Feb 18 13:59:31 crc kubenswrapper[4739]: W0218 13:59:31.563219 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Feb 18 13:59:31 crc kubenswrapper[4739]: E0218 13:59:31.563286 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.586014 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.587115 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.587142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.587155 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.587178 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 13:59:31 crc kubenswrapper[4739]: E0218 13:59:31.587579 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.647897 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.658062 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.715911 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.349416 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:35:10.631636996 +0000 UTC
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.462731 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85" exitCode=0
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.462819 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85"}
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.462986 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.464311 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.464346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.464362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.469066 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.469217 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.469491 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.469601 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.469077 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8"}
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.471669 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474298 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.476156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.476223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.476240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.788720 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.349772 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:51:11.172308205 +0000 UTC
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.477564 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478240 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28"}
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478290 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b"}
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71"}
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478325 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b"}
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478428 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478467 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478512 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478529 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478868 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479817 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.544320 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.350067 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:34:12.212399689 +0000 UTC
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.416023 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.438115 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.488799 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.488889 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.489658 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9"}
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490618 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490817 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.788289 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.790103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.790186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.790257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.790303 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.351808 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:31:36.428735494 +0000 UTC
Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.491498 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.491522 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.492705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.492744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.492757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.493010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.493064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.493082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:36 crc kubenswrapper[4739]: I0218 13:59:36.352752 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:04:26.685149407 +0000 UTC
Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.353642 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:32:15.765564484 +0000 UTC
Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.721699 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.721907 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.724476 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.724525 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.724540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.000701 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.354381 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:18:25.834287587 +0000 UTC
Feb 18 13:59:38 crc kubenswrapper[4739]: E0218 13:59:38.483275 4739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.500134 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.501301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.501352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.501364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.354562 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:43:26.023243226 +0000 UTC
Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.383793 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.384108 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.385652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.385680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.385689 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.355279 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:49:37.419190428 +0000 UTC
Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.779973 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.780207 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.781691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.781730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.781742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:41 crc kubenswrapper[4739]: I0218 13:59:41.001716 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 13:59:41 crc kubenswrapper[4739]: I0218 13:59:41.001807 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 13:59:41 crc kubenswrapper[4739]: I0218 13:59:41.355953 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:14:13.275266949 +0000 UTC
Feb 18 13:59:42 crc kubenswrapper[4739]: W0218 13:59:42.097329 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.097501 4739 trace.go:236] Trace[1638822830]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 13:59:32.095) (total time: 10001ms):
Feb 18 13:59:42 crc kubenswrapper[4739]: Trace[1638822830]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:59:42.097)
Feb 18 13:59:42 crc kubenswrapper[4739]: Trace[1638822830]: [10.001458562s] [10.001458562s] END
Feb 18 13:59:42 crc kubenswrapper[4739]: E0218 13:59:42.097541 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.336227 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.356697 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:37:36.90987619 +0000 UTC
Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.367517 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.367582 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.377758 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.377834 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.796294 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]log ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]etcd ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/priority-and-fairness-filter ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-apiextensions-informers ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-apiextensions-controllers ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/crd-informer-synced ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-system-namespaces-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 18 13:59:42 crc kubenswrapper[4739]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/priority-and-fairness-config-producer ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/bootstrap-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-registration-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-discovery-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]autoregister-completion ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-openapi-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 18 13:59:42 crc kubenswrapper[4739]: livez check failed
Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.796377 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 13:59:43 crc kubenswrapper[4739]: I0218 13:59:43.357557 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:18:34.184837643 +0000 UTC
Feb 18 13:59:44 crc kubenswrapper[4739]: I0218 13:59:44.358131 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:32:47.810128731 +0000 UTC
Feb 18 13:59:45 crc kubenswrapper[4739]: I0218 13:59:45.358499 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:37:32.81525346 +0000 UTC
Feb 18 13:59:46 crc kubenswrapper[4739]: I0218 13:59:46.193354 4739 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 18 13:59:46 crc kubenswrapper[4739]: I0218 13:59:46.359332 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:48:53.358970116 +0000 UTC
Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.359956 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:42:23.921281367 +0000 UTC
Feb 18 13:59:47 crc kubenswrapper[4739]: E0218 13:59:47.361056 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.363733 4739 trace.go:236] Trace[1416376946]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 13:59:35.680) (total time: 11682ms):
Feb 18 13:59:47 crc kubenswrapper[4739]: Trace[1416376946]: ---"Objects listed" error: 11682ms (13:59:47.363)
Feb 18 13:59:47 crc kubenswrapper[4739]: Trace[1416376946]: [11.682726099s] [11.682726099s] END
Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.363782 4739 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.363823 4739 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.363906 4739 trace.go:236] Trace[672147440]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 13:59:32.486) (total time: 14876ms):
Feb 18 13:59:47 crc kubenswrapper[4739]: Trace[672147440]: ---"Objects listed" error: 14876ms (13:59:47.363)
Feb 18 13:59:47 crc kubenswrapper[4739]: Trace[672147440]: [14.876959033s] [14.876959033s] END
Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.363937 4739 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.364580 4739 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 18 13:59:47 crc kubenswrapper[4739]: E0218 13:59:47.367323 4739 kubelet_node_status.go:99] "Unable
to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.369440 4739 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.587591 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54350->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.587654 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54350->192.168.126.11:17697: read: connection reset by peer" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.587904 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54358->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.587972 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54358->192.168.126.11:17697: read: connection reset by peer" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.726212 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.795656 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.796331 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.796468 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.800191 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.242895 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.331912 4739 apiserver.go:52] "Watching apiserver" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.334510 4739 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.334799 4739 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.335141 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.335170 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.335207 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.335676 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.335776 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.335912 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.336376 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.336494 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.336615 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.337331 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.337747 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.337861 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.338064 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.338102 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.339042 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.339144 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.339169 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.340044 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.347656 4739 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 13:59:48 crc kubenswrapper[4739]: 
I0218 13:59:48.360722 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:56:35.297977626 +0000 UTC Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372091 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372140 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372166 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372191 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372217 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372240 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372263 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372287 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372312 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372376 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372404 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372435 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372485 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372516 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372545 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 
13:59:48.372588 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372658 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372687 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372717 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372749 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372812 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372844 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372874 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372907 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc 
kubenswrapper[4739]: I0218 13:59:48.372938 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372997 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373326 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373366 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373400 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 
13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373439 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373491 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373521 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373552 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373581 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373612 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373660 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373696 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373686 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373762 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373892 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373941 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374086 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374370 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374459 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374546 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374543 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374566 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374585 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374612 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374631 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374649 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374666 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374685 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374707 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374728 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374747 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374765 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374782 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374798 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374814 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374832 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374856 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374872 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374888 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374930 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374950 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374970 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374994 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375021 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 13:59:48 crc 
kubenswrapper[4739]: I0218 13:59:48.375042 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375066 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375089 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375111 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375133 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375158 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375212 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375213 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375398 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375424 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375466 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375490 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375512 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375534 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375560 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375587 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375607 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375628 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375671 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375710 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375732 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375755 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375775 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375799 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375824 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375845 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375865 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375890 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375911 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375953 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375973 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376002 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376022 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377045 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377070 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377097 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377120 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377143 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377168 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377190 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377214 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377247 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377269 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377294 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377318 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377340 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377367 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377391 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377413 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377435 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377474 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377500 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377525 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377548 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377595 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 13:59:48 crc 
kubenswrapper[4739]: I0218 13:59:48.377618 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377642 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377665 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377715 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377737 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377765 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377789 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377814 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377839 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377863 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377886 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377909 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378005 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378033 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378059 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378083 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" 
(UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378136 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378161 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374597 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374624 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374959 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374977 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374974 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375013 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375131 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375387 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375419 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375518 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375573 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375617 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375777 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375839 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376045 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376085 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376332 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376388 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376695 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377061 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377134 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377234 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377245 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377468 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377659 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378565 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377678 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378611 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377811 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378169 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378178 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378190 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378665 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378892 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379023 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379049 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379151 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379348 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379350 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379547 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379763 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380067 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380118 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380521 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380544 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380676 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380692 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381604 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381637 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381656 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381626 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381701 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381755 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381891 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381956 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381954 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.382292 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.382666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.382702 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.382886 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383115 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383548 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383699 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383709 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383774 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.383832 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 13:59:48.883812188 +0000 UTC m=+21.379533130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378186 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384629 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384652 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384675 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384697 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384717 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384741 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384763 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384787 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384831 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384852 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384861 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384877 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384904 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384947 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384990 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385027 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385179 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385207 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385247 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385272 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385304 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385321 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385336 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385412 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385438 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385472 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385478 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385547 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385565 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385588 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385754 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385783 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385787 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385810 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385837 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385860 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385816 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385877 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385886 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386009 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386057 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386097 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386161 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386231 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386268 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386304 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386338 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386372 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386409 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386467 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386505 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386539 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386608 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387025 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387125 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387679 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387775 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387812 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387854 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387893 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387932 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387971 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388005 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388040 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388081 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388132 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388682 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390082 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390113 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390133 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390153 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390193 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390213 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390232 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390250 4739 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on
node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390270 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390290 4739 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390308 4739 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390327 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390346 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390364 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390383 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390403 4739 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390421 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390440 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390485 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390506 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390525 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390544 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390563 4739 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390581 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390601 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390621 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390639 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390658 4739 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390677 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390695 4739 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390714 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390733 4739 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390753 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390774 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390792 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390811 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390830 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390852 4739 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390871 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390890 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390908 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390927 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390945 4739 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390963 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc 
kubenswrapper[4739]: I0218 13:59:48.390985 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391003 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391021 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391039 4739 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391058 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391077 4739 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391096 4739 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391115 4739 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391170 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391190 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391209 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391229 4739 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391248 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391269 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391287 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391306 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391326 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391346 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391365 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391384 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391402 4739 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391420 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391467 4739 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391487 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391506 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391525 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391546 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391565 4739 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391584 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391601 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391621 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391640 4739 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391658 4739 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391677 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391696 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391714 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on 
node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391734 4739 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391755 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391774 4739 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391796 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391815 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391832 4739 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391852 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391871 4739 
reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386187 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386238 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.393248 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386251 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386275 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386420 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386454 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386638 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386650 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386655 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386808 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387050 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386561 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386992 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387562 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387795 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388129 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388226 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388339 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388496 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388554 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.389079 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.389184 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.389570 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.389627 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390026 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390189 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390152 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390267 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390274 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390502 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390520 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390597 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390614 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391100 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391149 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391164 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391174 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391198 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391236 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391243 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391035 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391976 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392050 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392117 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392141 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392363 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392439 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392506 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392536 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392621 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392870 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.393327 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.393988 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.396227 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.396478 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.396900 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.396633 4739 
swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.397908 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.397981 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.398033 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.398097 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.398147 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:48.898130933 +0000 UTC m=+21.393851855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.398238 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.398357 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 13:59:48.898323188 +0000 UTC m=+21.394044320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.398585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.398973 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.402111 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.402310 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.402507 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.406539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.406769 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.407018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.407688 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.407935 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.408395 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.408705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.408750 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.408780 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.408827 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.409183 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.409213 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.409396 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.409483 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.412922 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.413514 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a
3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.413675 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.414277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.414387 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.414410 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.414423 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 
13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.414516 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:48.914491158 +0000 UTC m=+21.410212160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.414601 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.414746 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.416001 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.416693 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.416726 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.416743 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.416582 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.416803 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:48.916781323 +0000 UTC m=+21.412502255 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.417236 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.417365 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.417713 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.418338 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.418733 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.418816 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.418842 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.418861 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.419212 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.419357 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.419798 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.420143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.420392 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.420912 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.421172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.421492 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.422791 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.423400 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.423980 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.425313 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.425884 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.426707 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.426863 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.427479 4739 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.429145 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.429217 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.430293 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.430608 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.430707 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.431140 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.431199 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.431236 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.431646 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.431990 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.432612 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.433546 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.433722 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.435810 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.437277 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.438711 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.438881 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.439315 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.440764 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.441423 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.444098 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.445371 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.446306 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.447997 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.449319 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.450319 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.451437 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.452336 4739 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.452517 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.455358 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.456102 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: 
"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.456775 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.457401 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.459372 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.460926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.460978 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.461311 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.462036 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.464170 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.465247 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.465772 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.466895 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.467564 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.468680 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.469207 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.470215 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.470412 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.470842 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.472693 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.473268 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.474225 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.474699 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.475768 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.476330 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.476795 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.488477 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.494794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 
13:59:48.494906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495019 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495041 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495059 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495081 4739 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495099 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495115 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 
crc kubenswrapper[4739]: I0218 13:59:48.495131 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495147 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495164 4739 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495180 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495196 4739 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495213 4739 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495229 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495245 4739 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495262 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495277 4739 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495291 4739 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495307 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495323 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495338 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495353 4739 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495368 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495384 4739 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495401 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495416 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495432 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495469 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495484 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495498 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495513 4739 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495528 4739 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495545 4739 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495560 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495577 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495592 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 
13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495605 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495621 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495638 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495654 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495668 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495683 4739 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495698 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495713 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495728 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495743 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495759 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495775 4739 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495791 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495806 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495821 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 
18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495835 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495851 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495867 4739 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495882 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495897 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495912 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495927 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495943 4739 reconciler_common.go:293] "Volume detached for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495959 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495975 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495990 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496004 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496027 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496043 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496058 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496073 4739 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496088 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496103 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496118 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496134 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496149 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496164 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 
18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496179 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496195 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496212 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496239 4739 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496255 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496271 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496286 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496302 4739 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496317 4739 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496332 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496348 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496363 4739 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496377 4739 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496391 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496405 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" 
DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496420 4739 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496435 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496469 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496485 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496500 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496515 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496532 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 
13:59:48.496553 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496571 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496588 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496604 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496620 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496635 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496650 4739 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496666 4739 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496684 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496700 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496716 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496731 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.497759 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.499228 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") 
pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.501592 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.518141 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.528604 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.529906 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.530672 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8" exitCode=255 Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.530749 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8"} Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.535189 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.536747 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.536788 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.536994 4739 scope.go:117] "RemoveContainer" containerID="8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.540463 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.550170 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.563859 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.578316 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"
2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.589389 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a
3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.598198 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.608351 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.621029 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.631633 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.644609 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.664278 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.664506 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.665646 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.673159 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.685490 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.707097 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.724053 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.901100 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.901203 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.901259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.901403 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.901495 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.901527 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:49.901506898 +0000 UTC m=+22.397227830 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.901551 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:49.901538339 +0000 UTC m=+22.397259261 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.901570 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 13:59:49.90156001 +0000 UTC m=+22.397280932 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.002572 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.002629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002765 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002785 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002797 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002834 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002875 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002892 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002853 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:50.002837001 +0000 UTC m=+22.498557923 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002989 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:50.002956364 +0000 UTC m=+22.498677476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.360831 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:49:39.410509504 +0000 UTC Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.534214 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"916e8f95206be7dd9856b3f6fe2498277be5c1b911a349bfcbfef0acce91881c"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.535755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.535792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"273f18efd8e25c48124c4936031339dd4aeff5030e9a2a2a97203bf534b02802"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.537680 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.537838 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.537930 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"99d2370b0ab8bca0dbc31de2ec404ccc2969b9becd4a7f878ce9d6eca641de44"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.540412 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.542705 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.558485 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.569960 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.579516 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.601254 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.616022 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.631532 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.642722 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.654574 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.664690 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.676530 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.688230 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.701617 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.716105 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.726084 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.735054 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.744037 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.912792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.912874 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.912915 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.913022 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.913072 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 13:59:51.913034273 +0000 UTC m=+24.408755235 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.913125 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:51.913105924 +0000 UTC m=+24.408826886 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.913134 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.913242 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:51.913215387 +0000 UTC m=+24.408936349 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.014040 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.014120 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014227 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014235 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014281 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014242 4739 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014296 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014309 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014360 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:52.014346745 +0000 UTC m=+24.510067667 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014374 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:52.014368376 +0000 UTC m=+24.510089298 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.361784 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 20:01:45.614718423 +0000 UTC Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.410500 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.410768 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.410941 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.410767 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.410574 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.411071 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.421827 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.422554 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.424082 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.424770 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.426031 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.426580 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.545615 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.831667 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 13:59:50 crc 
kubenswrapper[4739]: I0218 13:59:50.849318 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.851284 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.854020 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.869004 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.884424 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.898700 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.917894 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.932415 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.944280 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.955658 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.971704 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.986672 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.000319 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.016882 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.030273 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.050784 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.074041 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.088181 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.101778 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.362260 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:20:34.706136579 +0000 UTC Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.550125 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978"} Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.559588 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.575217 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.625243 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.639236 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.651790 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.662678 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.674505 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.686486 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.699288 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.711759 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.928915 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.929003 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.929060 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.929097 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 13:59:55.929066124 +0000 UTC m=+28.424787056 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.929207 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.929225 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.929272 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:55.929252948 +0000 UTC m=+28.424973920 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.929294 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:55.929285239 +0000 UTC m=+28.425006291 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.030283 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.030400 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030525 4739 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030574 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030598 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030601 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030632 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030653 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030690 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-18 13:59:56.030663273 +0000 UTC m=+28.526384235 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030726 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:56.030708084 +0000 UTC m=+28.526429046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.362380 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:40:47.029461354 +0000 UTC Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.410052 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.410114 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.410189 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.410314 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.410416 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.410519 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.362738 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:04:36.04391963 +0000 UTC Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.767493 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.769763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.769817 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.769834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.769913 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.796689 4739 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.796792 4739 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.797908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.797937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.797948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc 
kubenswrapper[4739]: I0218 13:59:53.797964 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.797975 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.823420 4739 csr.go:261] certificate signing request csr-w9vpp is approved, waiting to be issued Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.845046 4739 csr.go:257] certificate signing request csr-w9vpp is issued Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.890501 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has no 
disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fb
a3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e90
19a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd8
8fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.895201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.895236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.895247 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.895263 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.895275 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.910021 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.910578 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mdk59"] Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.910873 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.913420 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.913755 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917330 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917521 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.929786 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.936122 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.939744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.939795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.939807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.939824 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.939836 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.954767 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.961471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.961518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.961530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.961546 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.961558 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.965198 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.974393 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.974605 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.975953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.975980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.975990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.976006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.976019 4739 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.980820 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.002803 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.019145 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.033415 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.044876 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.047136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-hosts-file\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.047201 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csts\" (UniqueName: 
\"kubernetes.io/projected/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-kube-api-access-6csts\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.056623 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.069502 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.077832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.077880 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.077891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.077907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.077917 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.085383 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.148016 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csts\" (UniqueName: \"kubernetes.io/projected/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-kube-api-access-6csts\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.148063 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-hosts-file\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.148127 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-hosts-file\") pod \"node-resolver-mdk59\" (UID: 
\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.165342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csts\" (UniqueName: \"kubernetes.io/projected/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-kube-api-access-6csts\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.180410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.180459 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.180467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.180480 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.180490 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.222546 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: W0218 13:59:54.235865 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef364cd3_8b0e_4ebb_96a9_f660f4dd776a.slice/crio-0a6f4cabed43e26c586da8fdd4c7f4c8e5f03039f28fe82573bf502745b6785a WatchSource:0}: Error finding container 0a6f4cabed43e26c586da8fdd4c7f4c8e5f03039f28fe82573bf502745b6785a: Status 404 returned error can't find the container with id 0a6f4cabed43e26c586da8fdd4c7f4c8e5f03039f28fe82573bf502745b6785a Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.282756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.282808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.282819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.282839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.282849 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.323054 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-h9slg"] Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.323952 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.326574 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.326951 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.330004 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.330400 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.333728 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.363015 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:40:22.215901014 +0000 UTC Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.371220 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.390164 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.390208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.390218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.390235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.390246 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.396727 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.409566 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:54 crc kubenswrapper[4739]: E0218 13:59:54.409671 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.409911 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:54 crc kubenswrapper[4739]: E0218 13:59:54.409962 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.409986 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:54 crc kubenswrapper[4739]: E0218 13:59:54.410117 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.418768 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.439381 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.448981 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-cni-binary-copy\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-system-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450333 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-os-release\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450354 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-hostroot\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450376 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsrwf\" (UniqueName: \"kubernetes.io/projected/ec8fd6de-f77b-48a7-848f-a1b94e866365-kube-api-access-lsrwf\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450400 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450421 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-kubelet\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-etc-kubernetes\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450497 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-bin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450527 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-multus-certs\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450548 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-cnibin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450567 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-k8s-cni-cncf-io\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450594 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-netns\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450621 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-conf-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450643 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-socket-dir-parent\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450664 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-multus\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450688 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-daemon-config\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.468964 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.483363 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.492250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.492276 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.492285 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 
13:59:54.492297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.492307 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.499991 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.518514 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.547860 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551160 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-socket-dir-parent\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551200 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-multus\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551223 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-daemon-config\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551256 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-cni-binary-copy\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551286 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-system-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-os-release\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551320 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-multus\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551331 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-hostroot\") pod \"multus-h9slg\" (UID: 
\"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551382 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-hostroot\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsrwf\" (UniqueName: \"kubernetes.io/projected/ec8fd6de-f77b-48a7-848f-a1b94e866365-kube-api-access-lsrwf\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551427 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-os-release\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551470 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-system-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551437 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551533 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-socket-dir-parent\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-kubelet\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551502 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551570 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-kubelet\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551583 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-etc-kubernetes\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551638 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-etc-kubernetes\") pod \"multus-h9slg\" (UID: 
\"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551648 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-bin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551679 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-multus-certs\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551683 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-bin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551703 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-cnibin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551726 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-multus-certs\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551730 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-k8s-cni-cncf-io\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551764 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-netns\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551764 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-k8s-cni-cncf-io\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551791 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-netns\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551816 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-conf-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551790 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-conf-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551792 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-cnibin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.552028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-cni-binary-copy\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.552293 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-daemon-config\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.558762 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mdk59" event={"ID":"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a","Type":"ContainerStarted","Data":"b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.558800 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mdk59" event={"ID":"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a","Type":"ContainerStarted","Data":"0a6f4cabed43e26c586da8fdd4c7f4c8e5f03039f28fe82573bf502745b6785a"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.569968 4739 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-lsrwf\" (UniqueName: \"kubernetes.io/projected/ec8fd6de-f77b-48a7-848f-a1b94e866365-kube-api-access-lsrwf\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.576741 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fa
bcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.594028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.594067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.594077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.594096 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.594107 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.597581 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.617316 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.634150 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.636613 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.650750 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.665948 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.680954 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.695749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.695788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.695798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.695816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.695827 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.701031 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.722169 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.740895 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.754628 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ltvvj"] Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.755584 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.757174 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mc7b4"] Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.757593 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4j94"] Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.757802 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.757813 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.758977 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.760076 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.763884 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.764115 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.764664 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.766298 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.766538 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.766677 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.766972 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.767094 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.767227 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.767669 4739 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.767897 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.767897 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.784850 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.798385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.798424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.798435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.798472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.798483 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.803276 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.833832 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.846342 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 13:54:53 +0000 UTC, rotation deadline is 2026-12-03 00:27:54.208291201 +0000 UTC Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.846396 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6898h27m59.361897159s for next certificate rotation Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.851494 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855838 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855893 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-os-release\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855912 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855944 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-875sv\" (UniqueName: \"kubernetes.io/projected/617869cd-510c-4491-a8f7-1a7bb2656f26-kube-api-access-875sv\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856032 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/947a1bc9-4557-4cd9-aa90-9d3893aad914-mcd-auth-proxy-config\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-system-cni-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856146 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-cnibin\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856196 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856222 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/947a1bc9-4557-4cd9-aa90-9d3893aad914-proxy-tls\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856244 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856271 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 
13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856295 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856343 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856368 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856389 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856423 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: 
\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856481 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856506 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtd5n\" (UniqueName: \"kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856586 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/947a1bc9-4557-4cd9-aa90-9d3893aad914-rootfs\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856606 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-binary-copy\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856629 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8p7\" (UniqueName: \"kubernetes.io/projected/947a1bc9-4557-4cd9-aa90-9d3893aad914-kube-api-access-hn8p7\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856652 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856696 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856742 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856762 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856792 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.862146 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.880748 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.900630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.900660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.900670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.900683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.900692 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.901681 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.915721 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.946397 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957748 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957804 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957825 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957864 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957878 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtd5n\" (UniqueName: \"kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/947a1bc9-4557-4cd9-aa90-9d3893aad914-rootfs\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957907 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-binary-copy\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957921 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8p7\" (UniqueName: \"kubernetes.io/projected/947a1bc9-4557-4cd9-aa90-9d3893aad914-kube-api-access-hn8p7\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957947 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957963 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957979 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957992 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958006 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958035 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958051 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958067 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-os-release\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958087 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958101 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958115 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-875sv\" (UniqueName: \"kubernetes.io/projected/617869cd-510c-4491-a8f7-1a7bb2656f26-kube-api-access-875sv\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958130 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958145 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/947a1bc9-4557-4cd9-aa90-9d3893aad914-mcd-auth-proxy-config\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958160 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-system-cni-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958177 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-cnibin\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958194 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958210 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/947a1bc9-4557-4cd9-aa90-9d3893aad914-proxy-tls\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958252 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958267 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958285 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958339 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958374 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958395 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958897 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.959373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.959620 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/947a1bc9-4557-4cd9-aa90-9d3893aad914-rootfs\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-binary-copy\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960697 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960749 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960783 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960806 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961201 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961240 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961582 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-system-cni-dir\") pod 
\"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961637 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-os-release\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961672 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961706 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961695 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961751 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961747 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961833 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-cnibin\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961850 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961873 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.962245 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/947a1bc9-4557-4cd9-aa90-9d3893aad914-mcd-auth-proxy-config\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.965553 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]
}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.966172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/947a1bc9-4557-4cd9-aa90-9d3893aad914-proxy-tls\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.966509 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.980115 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtd5n\" (UniqueName: \"kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.980318 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8p7\" (UniqueName: \"kubernetes.io/projected/947a1bc9-4557-4cd9-aa90-9d3893aad914-kube-api-access-hn8p7\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 
crc kubenswrapper[4739]: I0218 13:59:54.981752 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.982099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-875sv\" (UniqueName: \"kubernetes.io/projected/617869cd-510c-4491-a8f7-1a7bb2656f26-kube-api-access-875sv\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.997412 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.002849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.002889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.002900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.002917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.002929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.014607 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.026356 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.040496 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.057631 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.066870 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.074238 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.079724 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:55 crc kubenswrapper[4739]: W0218 13:59:55.085135 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947a1bc9_4557_4cd9_aa90_9d3893aad914.slice/crio-86770f193e77c89b4d1c3736332251a3c332bd2282fffa5e5bc125b5fdcf2747 WatchSource:0}: Error finding container 86770f193e77c89b4d1c3736332251a3c332bd2282fffa5e5bc125b5fdcf2747: Status 404 returned error can't find the container with id 86770f193e77c89b4d1c3736332251a3c332bd2282fffa5e5bc125b5fdcf2747 Feb 18 13:59:55 crc kubenswrapper[4739]: W0218 13:59:55.101348 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf04e1fa3_4bb9_41e9_bf1d_a2862fb63224.slice/crio-994cdd394e91062d3bf50c4eb1ba16a7ab9c2957bfb870b8f9ecfcf4d7fc50a5 WatchSource:0}: Error finding container 994cdd394e91062d3bf50c4eb1ba16a7ab9c2957bfb870b8f9ecfcf4d7fc50a5: Status 404 returned error can't find the container with id 994cdd394e91062d3bf50c4eb1ba16a7ab9c2957bfb870b8f9ecfcf4d7fc50a5 Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.104653 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.104780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.104867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.105084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.105149 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.208322 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.208362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.208372 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.208388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.208399 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.311511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.311552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.311560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.311574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.311582 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.363529 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:13:56.267082403 +0000 UTC Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.415915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.415962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.415973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.415990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.416002 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.518603 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.518632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.518642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.518658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.518668 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.564161 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7" exitCode=0 Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.564223 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.564248 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"994cdd394e91062d3bf50c4eb1ba16a7ab9c2957bfb870b8f9ecfcf4d7fc50a5"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.566040 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerStarted","Data":"6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.566082 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerStarted","Data":"d88ffc1d0a6f92570ad7561edcb514a76ecb11d8d9b6417ba255e803be63ca80"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.567923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerStarted","Data":"f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.567969 4739 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerStarted","Data":"059d35ee1e8ad1f1ba1bb06bc8bad03ac79364e9a893a83f833ab5f10df7108f"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.571327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.571366 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.571377 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"86770f193e77c89b4d1c3736332251a3c332bd2282fffa5e5bc125b5fdcf2747"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.580746 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.596171 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.610812 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.620832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.620867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.620879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.620893 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.620905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.623904 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.643661 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.658701 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.683908 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.705919 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.719916 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.723410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.723466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.723476 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.723490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.723500 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.733755 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.749543 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.762037 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.774570 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.789394 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.807832 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0
971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.821227 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.825724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.825895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.825963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.826033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.826098 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.834204 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:
59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.845931 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.861282 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.874688 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.885851 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.898722 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.911672 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.925243 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.929049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.929100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.929113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc 
kubenswrapper[4739]: I0218 13:59:55.929133 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.929145 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.944714 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.958067 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.968536 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.968668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:55 crc kubenswrapper[4739]: E0218 13:59:55.968695 4739 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:00:03.968667827 +0000 UTC m=+36.464388759 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 13:59:55 crc kubenswrapper[4739]: E0218 13:59:55.968774 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.968788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:55 crc kubenswrapper[4739]: E0218 13:59:55.968825 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:03.96881264 +0000 UTC m=+36.464533562 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:55 crc kubenswrapper[4739]: E0218 13:59:55.968889 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:55 crc kubenswrapper[4739]: E0218 13:59:55.968956 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:03.968942453 +0000 UTC m=+36.464663385 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.970856 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.991706 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.031782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 
13:59:56.031977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.032058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.032118 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.032180 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.070088 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.070169 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070323 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070355 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070376 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070475 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:04.07042709 +0000 UTC m=+36.566148042 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070591 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070658 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070716 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070814 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:04.070794539 +0000 UTC m=+36.566515461 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.134713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.134749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.134761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.134776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.134786 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.237208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.237473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.237565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.237682 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.237765 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.340526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.340574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.340586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.340603 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.340615 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.364512 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:55:06.307759645 +0000 UTC Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.409914 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.409942 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.409995 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.410044 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.410136 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.410207 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.443542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.443576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.443584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.443598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.443610 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.545750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.545796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.545805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.545819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.545829 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.577259 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.577308 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.577322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.577333 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.577343 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.578663 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0" exitCode=0 Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.578685 4739 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.592519 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.607328 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.622244 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.632853 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.643720 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.651067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 
13:59:56.651107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.651118 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.651132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.651140 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.665160 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.681143 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.701667 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.715704 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.731760 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.742195 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.753617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.753649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.753657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.753671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.753681 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.754720 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z 
is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.766102 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.777409 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.855982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.856432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.856499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.856525 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.856542 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.958478 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.958526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.958536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.958552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.958562 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.973690 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-p98v4"] Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.974222 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.976275 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.976389 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.976813 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.977923 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.990981 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.003984 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.022826 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.038431 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.051699 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.061384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.061431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.061473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.061492 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.061506 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.079895 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15ef6462-8149-4976-b2f8-26123d8081ee-host\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.079956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gwp\" (UniqueName: \"kubernetes.io/projected/15ef6462-8149-4976-b2f8-26123d8081ee-kube-api-access-s4gwp\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.080007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/15ef6462-8149-4976-b2f8-26123d8081ee-serviceca\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.080674 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.098545 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.114394 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.130834 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.145252 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.162882 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.164390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.164435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.164469 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.164486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.164499 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.179680 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.181220 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15ef6462-8149-4976-b2f8-26123d8081ee-host\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.181293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4gwp\" (UniqueName: \"kubernetes.io/projected/15ef6462-8149-4976-b2f8-26123d8081ee-kube-api-access-s4gwp\") pod \"node-ca-p98v4\" (UID: 
\"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.181334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/15ef6462-8149-4976-b2f8-26123d8081ee-serviceca\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.181405 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15ef6462-8149-4976-b2f8-26123d8081ee-host\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.182659 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/15ef6462-8149-4976-b2f8-26123d8081ee-serviceca\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.191935 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.201942 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4gwp\" (UniqueName: \"kubernetes.io/projected/15ef6462-8149-4976-b2f8-26123d8081ee-kube-api-access-s4gwp\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.214740 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3
ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.230104 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.267597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.267650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.267662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.267680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.267694 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.286085 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: W0218 13:59:57.301545 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15ef6462_8149_4976_b2f8_26123d8081ee.slice/crio-36d98cfedcd49dc014867f00845205a6e4227dc4ec28eb4a858bfbb784675758 WatchSource:0}: Error finding container 36d98cfedcd49dc014867f00845205a6e4227dc4ec28eb4a858bfbb784675758: Status 404 returned error can't find the container with id 36d98cfedcd49dc014867f00845205a6e4227dc4ec28eb4a858bfbb784675758 Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.365337 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:30:12.230622366 +0000 UTC Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.370999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.371041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.371056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.371076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.371090 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.475941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.475993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.476005 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.476022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.476033 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.578872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.578909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.578922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.578938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.578951 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.582938 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p98v4" event={"ID":"15ef6462-8149-4976-b2f8-26123d8081ee","Type":"ContainerStarted","Data":"36d98cfedcd49dc014867f00845205a6e4227dc4ec28eb4a858bfbb784675758"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.584908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerStarted","Data":"6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.588728 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.601476 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.616346 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.632999 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.646360 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.659485 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.672314 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.680819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 
13:59:57.680853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.680862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.680875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.680885 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.692510 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.712136 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.734850 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.747977 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.764251 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.783263 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.784853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.784884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.784896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.784913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.784925 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.794793 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.806311 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.824223 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.888131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.888349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.888359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.888375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.888385 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.993393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.993437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.993466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.993482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.993493 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.056159 4739 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.095753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.095795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.095806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.095821 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.095832 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.198186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.198225 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.198235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.198251 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.198263 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.300920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.300975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.300990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.301011 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.301026 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.366381 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:47:45.99613088 +0000 UTC Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.404185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.404228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.404243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.404281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.404295 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.409862 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.409951 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:58 crc kubenswrapper[4739]: E0218 13:59:58.410310 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:58 crc kubenswrapper[4739]: E0218 13:59:58.410113 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.409946 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:58 crc kubenswrapper[4739]: E0218 13:59:58.410490 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.427225 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":
\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.451334 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.469655 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 
13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.481211 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.507551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.507588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.507598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.507654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.507669 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.508924 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.521871 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.532880 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.546628 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.562113 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.577490 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.592360 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.595715 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p98v4" event={"ID":"15ef6462-8149-4976-b2f8-26123d8081ee","Type":"ContainerStarted","Data":"d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.597535 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b" exitCode=0 Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.597599 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.605377 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.609538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.609578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.609588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc 
kubenswrapper[4739]: I0218 13:59:58.609605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.609617 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.625835 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.643102 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.658239 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.675736 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.686017 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.705124 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13
:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.711549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.711576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.711584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.711596 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.711605 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.718960 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.733292 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.744119 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.756245 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.769692 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.784423 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.799644 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.814375 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.816032 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.816088 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.816100 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.816116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.816127 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.828179 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" 
Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.839290 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.851984 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc 
kubenswrapper[4739]: I0218 13:59:58.874988 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.918289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.918320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.918328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.918340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.918349 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.021059 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.021090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.021099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.021112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.021121 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.124155 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.124197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.124214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.124236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.124254 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.226856 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.226890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.226898 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.226912 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.226922 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.328888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.328947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.328967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.328992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.329010 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.367621 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:35:12.955593666 +0000 UTC Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.431621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.431690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.431716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.431748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.431771 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.535033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.535077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.535094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.535117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.535133 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.608428 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.611922 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148" exitCode=0 Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.612000 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.627935 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.637311 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.637373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.637391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 
13:59:59.637415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.637432 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.643128 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.657195 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.668517 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.683740 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.702573 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.722279 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.739880 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.739938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.739951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.739971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.739984 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.741007 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 
13:59:59.756182 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.769401 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.784516 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.811080 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.835831 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.842716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.842963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.842974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.842989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.843000 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.852155 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.865683 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.946023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.946062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.946072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 
13:59:59.946087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.946097 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.049711 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.049789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.049809 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.049835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.049853 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.153616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.153700 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.153728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.153760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.153785 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.256769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.256808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.256819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.256835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.256846 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.361215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.361270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.361283 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.361301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.361313 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.367891 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:48:24.829793243 +0000 UTC Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.410082 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.410227 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.410293 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:00 crc kubenswrapper[4739]: E0218 14:00:00.410247 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:00 crc kubenswrapper[4739]: E0218 14:00:00.410436 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:00 crc kubenswrapper[4739]: E0218 14:00:00.410541 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.464283 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.464338 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.464347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.464362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.464374 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.566659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.566701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.566712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.566728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.566739 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.618441 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c" exitCode=0 Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.618513 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.648914 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.663003 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.668574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.668613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.668625 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.668643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.668656 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.674580 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.686644 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.700166 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.711985 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.724808 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.736323 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.749832 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.763533 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.772215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.772262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.772275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.772294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.772305 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.780022 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.791027 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-c
onfig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.800696 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.811537 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.828260 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.875189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.875228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.875237 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.875251 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.875263 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.978179 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.978212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.978221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.978234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.978243 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.081629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.081704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.081728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.081769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.081793 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.184605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.184664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.184683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.184707 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.184728 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.287747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.288100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.288113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.288134 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.288146 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.368598 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:32:36.426601509 +0000 UTC Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.392728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.392924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.394380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.394492 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.394526 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.498599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.498637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.498649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.498676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.498688 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.601525 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.601559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.601570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.601585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.601596 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.630759 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerStarted","Data":"2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.643549 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.643885 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.643916 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.656806 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.675266 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 
14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.676582 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.676643 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.691209 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.705078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.705121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.705137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.705161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.705177 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.708134 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.738613 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.752682 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.764608 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.790689 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.808062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.808131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.808157 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.808205 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.808231 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.826086 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.844967 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 
14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.863488 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.877252 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc 
kubenswrapper[4739]: I0218 14:00:01.903889 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.910066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.910119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.910137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.910161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.910177 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.926733 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.937508 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.949694 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.960871 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.972327 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.991573 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.013990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.014062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.014072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.014085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.014095 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.025764 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.039912 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.052281 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.065162 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.077097 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.097222 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.114103 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.116969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.117009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.117022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.117039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.117052 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.131410 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z 
is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.147583 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95
ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.163605 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.176547 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875
sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.219064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.219127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.219146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.219170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.219187 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.321491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.321552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.321575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.321602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.321625 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.369770 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:26:53.140437306 +0000 UTC
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.410760 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.410892 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.410905 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:00:02 crc kubenswrapper[4739]: E0218 14:00:02.411033 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:00:02 crc kubenswrapper[4739]: E0218 14:00:02.411209 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:00:02 crc kubenswrapper[4739]: E0218 14:00:02.411359 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.424017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.424079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.424091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.424106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.424117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.527380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.527491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.527509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.527532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.527549 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.630681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.630713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.630721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.630736 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.630746 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.645688 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.732990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.733016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.733024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.733038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.733046 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.835180 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.835218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.835230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.835246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.835257 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.937712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.937745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.937754 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.937766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.937774 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.042633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.042718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.042759 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.042798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.042821 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.146351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.146479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.146506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.146534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.146561 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.249303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.249351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.249365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.249384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.249396 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.353254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.353333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.353343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.353360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.353370 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.370862 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:00:57.495921341 +0000 UTC Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.455991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.456034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.456042 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.456056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.456066 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.551342 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.558969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.559019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.559036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.559056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.559073 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.571060 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d
182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.587560 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.604402 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875
sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.622571 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.637710 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.653616 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578" exitCode=0 Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.653751 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.653754 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.654536 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38
fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.663993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.664031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.664041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.664055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.664085 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.686900 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.711577 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.729143 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb350660
92be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.742565 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.755861 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.767193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.767277 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.767295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.767355 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.767368 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.770375 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.786675 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.798706 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.814948 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.828249 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.840717 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.854557 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870324 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870695 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870743 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870771 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.883547 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.896132 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.915470 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.927572 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T1
3:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.937303 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.958165 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13
:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973814 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973825 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973828 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.988006 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.998598 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.058336 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.067827 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.067971 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.068040 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:00:20.067993699 +0000 UTC m=+52.563714621 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.068064 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.068089 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.068108 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:20.068095012 +0000 UTC m=+52.563815934 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.068192 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.068229 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:20.068221165 +0000 UTC m=+52.563942087 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.073258 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.075960 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.076190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.076275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.076349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.076420 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.144967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.145340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.145567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.145774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.145963 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.160300 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.164383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.164466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.164481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.164497 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.164508 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.169459 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.170357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.170573 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.170604 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.170621 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.170695 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:20.170675305 +0000 UTC m=+52.666396237 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.171057 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.171080 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.171091 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.171125 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:20.171111655 +0000 UTC m=+52.666832597 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.185577 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.191621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.191711 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.191734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.191761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.191783 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.211423 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.216484 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.216530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.216548 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.216571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.216593 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.233934 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.237699 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.237756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.237774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.237797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.237820 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.251382 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.251550 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.253769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.254092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.254220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.254342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.254468 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.357416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.357499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.357516 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.357541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.357558 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.371558 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:14:06.041126619 +0000 UTC Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.409988 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.410042 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.410117 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.410162 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.410329 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.410433 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.460606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.460650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.460662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.460677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.460688 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.563582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.564020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.564135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.564234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.564377 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.661121 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e" exitCode=0 Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.661179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.666865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.666913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.666925 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.666948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.666959 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.681624 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.696459 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:4
8Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.711494 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.724535 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.751217 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3
ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.765760 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.768942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.768972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.768983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.768999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.769009 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.775782 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.787704 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769
e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.799581 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.811766 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.823333 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.835285 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.849486 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.863137 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.870901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.871145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.871245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.871337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.871433 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.877793 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.973347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.973393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.973404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.973420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.973433 4739 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.076738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.076789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.076805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.076827 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.076844 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.180040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.180086 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.180103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.180118 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.180128 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.282538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.282635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.283027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.283129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.283149 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.372935 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:05:37.306191861 +0000 UTC
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.385871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.385942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.385968 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.385997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.386017 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.489366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.489418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.489434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.489491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.489528 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.592608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.592670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.592688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.592712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.592729 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.668226 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerStarted","Data":"6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.689787 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\
\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.695414 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.695506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.695529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.695616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.695639 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.706404 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.720930 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.738559 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.756177 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T1
3:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.766856 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.793380 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13
:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.797577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.797616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.797627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.797643 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.797656 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.813757 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.831217 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.848962 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.874610 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.887726 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.898539 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.899908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.899944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.899955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.899971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.899983 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.911246 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.924686 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.002196 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.002236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.002247 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.002263 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.002275 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.104826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.104862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.104873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.104888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.104899 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.207131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.207171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.207185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.207202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.207222 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.309919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.309967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.309978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.309996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.310010 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.373713 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:23:21.598073729 +0000 UTC Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.409403 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.409522 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:06 crc kubenswrapper[4739]: E0218 14:00:06.409617 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.409541 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:06 crc kubenswrapper[4739]: E0218 14:00:06.409717 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:06 crc kubenswrapper[4739]: E0218 14:00:06.409834 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.413085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.413144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.413168 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.413200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.413222 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.515094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.515167 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.515186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.515212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.515229 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.620645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.621357 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.621378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.621402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.621421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.724556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.724628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.724651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.724680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.724743 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.827077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.827124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.827141 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.827161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.827172 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.930659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.930745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.930769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.930805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.930829 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.033561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.033643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.033666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.033691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.033712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.136751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.136822 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.136842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.136870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.136891 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.240112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.240178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.240200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.240226 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.240244 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.343495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.343564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.343587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.343626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.343660 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.394156 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:16:14.391688026 +0000 UTC Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.445649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.445702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.445716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.445732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.445745 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.548655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.548693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.548703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.548717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.548728 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.573375 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr"] Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.574229 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.576393 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.578031 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.604973 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.622849 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.638112 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.650866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.650891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.650900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.650913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.650921 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.651899 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.663392 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.677061 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.678359 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/0.log" 
Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.680687 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550" exitCode=1 Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.680725 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.681357 4739 scope.go:117] "RemoveContainer" containerID="e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.703196 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.703241 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.703281 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: 
\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.703326 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99ghl\" (UniqueName: \"kubernetes.io/projected/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-kube-api-access-99ghl\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.704094 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.717087 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.731877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.743093 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.753343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.753408 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.753424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.753461 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.753478 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.762694 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.775259 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.787245 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.798046 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.805242 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.805504 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.805795 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.806047 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99ghl\" (UniqueName: 
\"kubernetes.io/projected/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-kube-api-access-99ghl\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.806265 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.807008 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.814983 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.816425 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.832284 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99ghl\" (UniqueName: \"kubernetes.io/projected/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-kube-api-access-99ghl\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.835308 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.854116 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.855726 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.855757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.855767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.855782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.855792 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.873881 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.889974 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.896206 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.902895 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: W0218 14:00:07.911401 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdde800e_9fbf_44dc_af43_d9cfc15dfecd.slice/crio-005404f31b97d22e0cb9749d7c7a5c39bbdbd8ae2922dae8226779eb67e69e16 WatchSource:0}: Error finding container 005404f31b97d22e0cb9749d7c7a5c39bbdbd8ae2922dae8226779eb67e69e16: Status 404 returned error can't find the container with id 005404f31b97d22e0cb9749d7c7a5c39bbdbd8ae2922dae8226779eb67e69e16 Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.919456 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.930715 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.944734 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.958493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.958530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.958538 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.958551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.958560 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.960427 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f0
9da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.971795 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.984838 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.997988 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.014729 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:06Z\\\",\\\"message\\\":\\\"ift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809552 6012 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809795 6012 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809895 6012 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809939 6012 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809992 6012 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810089 6012 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810355 6012 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810799 6012 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 14:00:06.811153 6012 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.025432 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.045930 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061839 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061849 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061833 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 
13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.072214 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.163822 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.163860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.163868 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.163881 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.163890 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.265783 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.265832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.265844 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.265863 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.265876 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.368093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.368127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.368136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.368149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.368160 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.394595 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:22:05.387590253 +0000 UTC Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.409948 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.410035 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.410064 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.410116 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.410182 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.410222 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.431730 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.
d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.469070 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.470866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.470894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.470904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.470919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.470929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.517767 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.537785 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.550166 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.561670 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.571632 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.573001 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.573040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.573053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.573070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.573081 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.584651 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.596908 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.611052 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.622704 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.634080 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.650406 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.671643 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:06Z\\\",\\\"message\\\":\\\"ift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809552 6012 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809795 6012 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809895 6012 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809939 6012 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809992 6012 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810089 6012 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810355 6012 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810799 6012 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 14:00:06.811153 6012 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.675742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.675777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.675788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.675806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.675818 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.676655 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-nhkmm"] Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.677306 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.677402 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.686896 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/1.log" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.687430 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/0.log" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.688101 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.691557 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac" exitCode=1 Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.691615 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.691660 4739 scope.go:117] "RemoveContainer" containerID="e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.692294 4739 scope.go:117] "RemoveContainer" containerID="9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.692427 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.694138 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" event={"ID":"fdde800e-9fbf-44dc-af43-d9cfc15dfecd","Type":"ContainerStarted","Data":"74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f"} 
Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.694180 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" event={"ID":"fdde800e-9fbf-44dc-af43-d9cfc15dfecd","Type":"ContainerStarted","Data":"e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.694193 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" event={"ID":"fdde800e-9fbf-44dc-af43-d9cfc15dfecd","Type":"ContainerStarted","Data":"005404f31b97d22e0cb9749d7c7a5c39bbdbd8ae2922dae8226779eb67e69e16"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.703118 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc 
kubenswrapper[4739]: I0218 14:00:08.716121 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d0965122
5b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 
13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.726559 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.748892 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13
:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.763678 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.776097 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.778078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.778125 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.778135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.778150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.778160 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.786809 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.798463 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4
a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.815476 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.819920 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx99g\" (UniqueName: \"kubernetes.io/projected/151d76ab-14d7-4b0b-a930-785156818a3e-kube-api-access-mx99g\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.820016 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.827861 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.838669 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.853759 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.870371 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc 
kubenswrapper[4739]: I0218 14:00:08.879892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.879942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.879955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.879972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.879985 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.882267 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.898783 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.912127 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.921799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx99g\" (UniqueName: 
\"kubernetes.io/projected/151d76ab-14d7-4b0b-a930-785156818a3e-kube-api-access-mx99g\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.921853 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.921988 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.922033 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:09.422018327 +0000 UTC m=+41.917739239 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.942965 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx99g\" (UniqueName: \"kubernetes.io/projected/151d76ab-14d7-4b0b-a930-785156818a3e-kube-api-access-mx99g\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.943271 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:06Z\\\",\\\"message\\\":\\\"ift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809552 6012 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809795 6012 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809895 6012 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809939 6012 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809992 6012 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810089 6012 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810355 6012 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810799 6012 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 14:00:06.811153 6012 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. 
No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPat
h\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.961803 4739 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.982807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.982860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.982871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.982890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.982904 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.085985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.086062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.086082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.086106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.086124 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.189047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.189109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.189128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.189151 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.189170 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.292168 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.292212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.292222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.292240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.292252 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.394822 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:09:56.359530018 +0000 UTC Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.395704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.395772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.395786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.395808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.395820 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.428325 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:09 crc kubenswrapper[4739]: E0218 14:00:09.428546 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:09 crc kubenswrapper[4739]: E0218 14:00:09.428661 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:10.42863872 +0000 UTC m=+42.924359682 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.498613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.498691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.498710 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.498734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.498753 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.601036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.601105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.601123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.601149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.601168 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.698393 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/1.log" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.702797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.702835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.702846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.702859 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.702871 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.806150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.806481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.806576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.806675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.806786 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.908930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.908988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.909006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.909031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.909048 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.011027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.011063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.011074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.011091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.011104 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.113838 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.113917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.113934 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.113965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.113988 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.216668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.216728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.216739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.216762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.216774 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.319639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.319693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.319707 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.319726 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.319738 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.395556 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:48:23.169473554 +0000 UTC Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.410011 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.410078 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.410144 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.410391 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.410398 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.410503 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.410671 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.410766 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.423216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.423254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.423262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.423297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.423306 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.437199 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.437345 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.437436 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:12.437417909 +0000 UTC m=+44.933138831 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.526581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.526623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.526635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.526652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.526663 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.587589 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.588903 4739 scope.go:117] "RemoveContainer" containerID="9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.589210 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.609724 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.625145 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.629407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.629497 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.629514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.629535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.629585 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.639928 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.675412 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: 
kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.690585 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a
29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.717949 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb350660
92be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.731464 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.732898 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.732955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.732972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.732996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.733014 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.756949 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.776836 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.788858 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.803958 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.819640 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.835146 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.836385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.836432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.836469 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.836493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.836509 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.853252 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.868183 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.886288 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a
4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:
56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"
cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.900345 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc 
kubenswrapper[4739]: I0218 14:00:10.938993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.939253 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.939418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.939644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.939785 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.042632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.042671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.042696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.042711 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.042722 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.145057 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.145122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.145136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.145153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.145164 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.247884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.247928 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.247939 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.247955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.247966 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.350219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.350255 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.350265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.350281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.350291 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.396247 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:53:14.756076496 +0000 UTC Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.452538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.452574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.452585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.452599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.452608 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.555621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.555659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.555667 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.555680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.555688 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.658421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.658539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.658563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.658596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.658617 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.760933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.761010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.761029 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.761063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.761082 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.864918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.864966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.864980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.864996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.865005 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.968077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.968135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.968154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.968180 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.968198 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.071650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.071728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.071749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.071781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.071803 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.174307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.174356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.174371 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.174391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.174405 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.276351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.276409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.276421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.276439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.276466 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.378436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.378582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.378600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.378655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.378670 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.397181 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:15:12.754851634 +0000 UTC Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.409572 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.409588 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.409588 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.409646 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.409769 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.409879 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.409975 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.410070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.471329 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.471518 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.471886 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:16.471867664 +0000 UTC m=+48.967588586 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.481877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.481938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.481955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.481980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.481998 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.585278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.585317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.585329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.585346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.585359 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.688627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.688685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.688704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.688732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.688756 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.791589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.791631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.791640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.791656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.791666 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.893878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.893938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.893962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.893991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.894013 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.998721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.998779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.998798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.998828 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.998850 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.101013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.101052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.101061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.101074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.101082 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.204082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.204122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.204130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.204143 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.204154 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.307061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.307132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.307156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.307188 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.307210 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.398231 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:54:25.081633871 +0000 UTC Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.410071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.410107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.410117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.410130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.410142 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.513487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.513545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.513564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.513588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.513777 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.616193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.616245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.616258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.616275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.616287 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.719033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.719097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.719116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.719142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.719160 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.821741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.821786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.821798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.821816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.821828 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.924945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.925019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.925037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.925064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.925081 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.028117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.028218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.028278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.028306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.028324 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.130951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.130996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.131004 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.131019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.131028 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.234008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.234081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.234105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.234134 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.234160 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.337624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.337688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.337706 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.337734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.337751 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.355898 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.355963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.355986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.356016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.356038 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.377533 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.382678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.382723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.382732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.382747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.382756 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.398963 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:02:36.669848772 +0000 UTC Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.403995 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",
\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.408978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409085 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409520 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.409582 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409620 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409654 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.409756 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.409845 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.410071 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.427154 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.431414 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.431521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.431539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.431566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.431622 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.450566 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.455531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.455566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.455578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.455596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.455607 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.469350 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.469545 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.471175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.471229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.471245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.471267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.471285 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.574539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.574627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.574648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.574674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.574692 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.677874 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.677935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.677952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.677976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.677993 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.781075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.781156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.781224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.781243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.781254 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.884039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.884085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.884097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.884116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.884128 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.987852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.987930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.987950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.987974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.987995 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.090385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.090658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.090748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.090850 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.090930 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.193384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.193484 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.193503 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.193527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.193544 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.295631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.295685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.295702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.295721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.295733 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.398979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.399026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.399036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.399055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.399067 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.399109 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:44:10.342885834 +0000 UTC Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.501310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.501402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.501415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.501433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.501471 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.604543 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.604931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.605100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.605135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.605153 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.708772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.709129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.709369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.709599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.709742 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.812916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.812982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.813003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.813033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.813054 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.916514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.916605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.916623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.916645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.916662 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.019651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.019720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.019748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.019777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.019800 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.122679 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.122741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.122763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.122793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.122816 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.225213 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.225263 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.225279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.225303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.225320 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.328541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.328590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.328606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.328629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.328645 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.399649 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:32:01.804771739 +0000 UTC Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.410021 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.410160 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.410395 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.410803 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.410947 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.411038 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.411165 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.411293 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.431257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.431316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.431335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.431360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.431378 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.515192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.515505 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.515614 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:24.515586527 +0000 UTC m=+57.011307489 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.534527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.534608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.534629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.534650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.534668 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.638292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.638644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.638817 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.638951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.639112 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.742068 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.742128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.742146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.742172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.742189 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.845120 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.845174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.845191 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.845214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.845232 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.948861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.948935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.948958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.948987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.949010 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.052252 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.052318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.052341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.052373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.052397 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.155688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.155733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.155743 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.155760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.155772 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.259250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.259782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.259815 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.259848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.259872 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.364616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.364666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.364680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.364698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.364712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.400845 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:17:26.935431723 +0000 UTC Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.467519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.467555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.467565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.467579 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.467588 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.570616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.570666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.570678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.570697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.570710 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.673243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.673286 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.673296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.673312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.673323 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.776510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.776587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.776599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.776615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.776626 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.879744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.879801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.879842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.879873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.879894 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.983350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.983492 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.983512 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.983539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.983555 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.086426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.086539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.086556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.086581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.086604 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.189565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.189613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.189623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.189639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.189651 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.292325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.292362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.292373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.292388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.292398 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.395535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.395637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.395650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.395668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.395679 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.402042 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:05:50.053467024 +0000 UTC Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.409592 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.409630 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:18 crc kubenswrapper[4739]: E0218 14:00:18.409704 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.409798 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.409806 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:18 crc kubenswrapper[4739]: E0218 14:00:18.409983 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:18 crc kubenswrapper[4739]: E0218 14:00:18.410162 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:18 crc kubenswrapper[4739]: E0218 14:00:18.410250 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.430472 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.444699 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.457819 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.483393 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: 
kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.498265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.498314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.498326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.498373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.498388 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.502521 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.526731 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0
971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.540614 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb350660
92be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.552934 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.566400 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.580358 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.592242 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.600370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.600396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.600407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.600425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.600461 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.602937 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.616662 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4
a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.629349 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.641334 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.660778 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.674212 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc 
kubenswrapper[4739]: I0218 14:00:18.702869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.702904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.702916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.702935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.702951 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.806091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.806150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.806166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.806189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.806206 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.913294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.913367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.913385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.913409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.913429 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.016394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.016489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.016509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.016541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.016562 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.119656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.119763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.119784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.119845 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.119864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.222616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.222720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.222734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.222757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.222776 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.325312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.325363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.325378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.325400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.325416 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.402557 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:13:04.532160355 +0000 UTC Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.428101 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.428151 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.428166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.428182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.428194 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.530972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.531047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.531069 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.531097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.531122 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.634218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.634272 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.634317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.634336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.634350 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.738088 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.738162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.738184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.738212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.738237 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.841090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.841148 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.841218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.841243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.841312 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.845020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.860987 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.881590 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf
6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.904649 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb350660
92be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.921357 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.939812 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.944685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.944774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.944797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.944827 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.944851 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.962289 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.977699 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.992226 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.008877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.023700 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.039916 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.047725 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.047768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.047787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.047808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.047821 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.064990 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.079514 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.097762 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.112389 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.126652 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.148145 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: 
kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.151201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.151289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.151306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.151333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.151350 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.153573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.153685 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.153716 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.153792 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.153831 4739 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:52.153818836 +0000 UTC m=+84.649539758 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.153950 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:00:52.153943679 +0000 UTC m=+84.649664591 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.154002 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.154021 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 14:00:52.15401578 +0000 UTC m=+84.649736702 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.166618 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318b
deaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254150 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254196 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254321 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254337 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254334 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254376 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254392 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254350 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254473 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:52.254430751 +0000 UTC m=+84.750151763 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254512 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254525 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:52.254504633 +0000 UTC m=+84.750225665 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254548 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.357097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.357167 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.357192 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.357220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.357244 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.402919 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:07:43.778554313 +0000 UTC Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.410328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.410480 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.410345 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.410590 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.410648 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.410782 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.410882 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.410973 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.460733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.460812 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.460848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.460877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.460900 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.563967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.564033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.564051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.564076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.564093 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.667583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.667640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.667657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.667681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.667699 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.770259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.770297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.770308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.770323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.770336 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.873256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.873292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.873304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.873323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.873338 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.976166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.976217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.976228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.976242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.976252 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.078792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.078843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.078860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.078882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.078904 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.181462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.181509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.181527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.181545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.181556 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.283792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.283860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.283871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.283910 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.283922 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.387850 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.387913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.387930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.387954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.387969 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.404079 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:23:08.351498249 +0000 UTC Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.489975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.490026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.490035 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.490047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.490057 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.592619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.592666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.592677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.592692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.592703 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.695177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.695224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.695235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.695250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.695263 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.798796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.798862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.798879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.798903 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.798919 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.901664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.901717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.901735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.901760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.901779 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.004744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.004801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.004820 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.004846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.004864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.108382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.108481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.108499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.108560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.108581 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.212648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.212760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.212774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.212796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.212811 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.315179 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.315213 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.315222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.315237 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.315246 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.404655 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:54:45.23368143 +0000 UTC Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.410182 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.410233 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:22 crc kubenswrapper[4739]: E0218 14:00:22.410358 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.410419 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:22 crc kubenswrapper[4739]: E0218 14:00:22.410644 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.410740 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:22 crc kubenswrapper[4739]: E0218 14:00:22.411089 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:22 crc kubenswrapper[4739]: E0218 14:00:22.411195 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.417565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.417608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.417626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.417653 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.417669 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.520379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.520526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.520554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.520577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.520594 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.623409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.623495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.623518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.623549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.623570 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.725857 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.725916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.725937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.725964 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.725984 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.829235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.829343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.829354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.829367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.829376 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.932667 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.932751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.932789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.932819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.932839 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.035539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.035589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.035602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.035620 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.035633 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.138709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.138756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.138772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.138792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.138806 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.241187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.241230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.241242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.241257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.241268 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.343561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.343591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.343600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.343631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.343641 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.404972 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:12:22.618446957 +0000 UTC Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.446040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.446096 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.446114 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.446139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.446156 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.549357 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.549432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.549483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.549518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.549544 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.653175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.653212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.653221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.653238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.653248 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.756008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.756065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.756078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.756102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.756116 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.859683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.859761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.859774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.859823 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.859837 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.962771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.962841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.962853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.962897 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.962911 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.064952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.064997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.065008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.065025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.065039 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.168260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.168302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.168314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.168330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.168340 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.270348 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.270418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.270430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.270471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.270483 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.373359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.373433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.373488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.373521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.373546 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.405364 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:11:51.842736135 +0000 UTC Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.409612 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.409643 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.409674 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.409847 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.409943 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.410038 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.410158 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.410619 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.410920 4739 scope.go:117] "RemoveContainer" containerID="9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.477580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.477658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.477676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.477727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.477746 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.582320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.582410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.582666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.582719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.582739 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.604046 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.604218 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.604294 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:40.604271334 +0000 UTC m=+73.099992286 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.692767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.692827 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.692846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.692870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.692888 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.756855 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/1.log" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.761761 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.763116 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.783100 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.796286 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc 
kubenswrapper[4739]: I0218 14:00:24.796570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.796594 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.796623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.796648 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.807858 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.808650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.808702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.808718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.808740 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.808755 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.829052 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.829642 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f
4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.850023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.850081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.850095 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.850119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.850137 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.885225 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.885330 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.890744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.890786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.890799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.890820 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.890835 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.915832 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.920581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.920638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.920652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.920673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.920690 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.930423 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: 
kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.945779 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.946373 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.950866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.950911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.950923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.950943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.950957 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.967547 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.967656 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.969123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.969149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.969159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.969202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.969212 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.973146 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.985951 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb350660
92be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.998272 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.009045 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.023406 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.034721 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.045855 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.056185 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.069103 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.071568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.071595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.071606 
4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.071621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.071634 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.086173 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.102559 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.113291 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc 
kubenswrapper[4739]: I0218 14:00:25.173916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.173951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.173959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.173973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.173982 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.277338 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.277400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.277418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.277469 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.277489 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.380775 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.380827 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.380843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.380866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.380883 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.406401 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:40:05.566871338 +0000 UTC Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.483360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.483410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.483422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.483458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.483471 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.585983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.586039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.586052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.586070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.586084 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.689398 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.689491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.689515 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.689543 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.689564 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.768614 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/2.log" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.769360 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/1.log" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.773013 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" exitCode=1 Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.773049 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.773115 4739 scope.go:117] "RemoveContainer" containerID="9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.774258 4739 scope.go:117] "RemoveContainer" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" Feb 18 14:00:25 crc kubenswrapper[4739]: E0218 14:00:25.776584 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.790864 4739 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.792227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.792250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.792258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.792272 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.792282 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.805283 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab82
5fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.818438 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.834953 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.866877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: 
kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event 
handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\
\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.884862 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.895400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.895499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 
14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.895524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.895554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.895576 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.915689 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.930939 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.940812 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.956404 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.970711 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.983187 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.994188 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.998524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.998559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.998571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.998587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.998599 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.008180 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z 
is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.020218 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95
ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.034212 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.045528 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.053652 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc 
kubenswrapper[4739]: I0218 14:00:26.101329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.101376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.101390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.101407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.101421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.204612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.204676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.204693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.204717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.204736 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.307679 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.307990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.308055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.308119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.308188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.406584 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 05:37:39.122772126 +0000 UTC Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.409581 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.409772 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.409655 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.409610 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:26 crc kubenswrapper[4739]: E0218 14:00:26.410169 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:26 crc kubenswrapper[4739]: E0218 14:00:26.410370 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:26 crc kubenswrapper[4739]: E0218 14:00:26.410619 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.410666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: E0218 14:00:26.410761 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.410900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.410972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.411003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.411020 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.514363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.514825 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.515024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.515178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.515343 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.618926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.619273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.619426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.619676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.619821 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.723249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.723669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.723821 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.723993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.724144 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.779736 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/2.log" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.785031 4739 scope.go:117] "RemoveContainer" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" Feb 18 14:00:26 crc kubenswrapper[4739]: E0218 14:00:26.785477 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.803763 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.823911 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.827380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.827438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.827485 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.827531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.827553 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.841640 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.861491 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.881805 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.900930 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.918840 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.930663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.930902 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.931052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.931194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.931327 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.941375 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.957140 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.975576 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36
bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.995158 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.007868 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.023377 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.035954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 
14:00:27.036020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.036044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.036074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.036098 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.056574 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.070782 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a
29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.095130 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.116571 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.129969 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.138969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.139011 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.139024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.139062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.139074 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.241940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.242001 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.242016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.242037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.242052 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.344587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.344630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.344642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.344659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.344670 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.407725 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:16:05.309302459 +0000 UTC Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.448250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.448312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.448336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.448367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.448390 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.551016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.551084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.551106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.551133 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.551156 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.654582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.654639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.654658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.654681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.654697 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.757218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.757273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.757291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.757317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.757337 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.860307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.860690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.860979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.861184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.861342 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.963833 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.963889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.963901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.963918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.963929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.067063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.067300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.067393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.067502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.067601 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.170169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.170222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.170234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.170250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.170265 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.272434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.272501 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.272512 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.272529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.272539 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.375649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.376222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.376252 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.376287 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.376312 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.408015 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:53:46.993999686 +0000 UTC Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.409407 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.409437 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.409516 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.409549 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:28 crc kubenswrapper[4739]: E0218 14:00:28.409659 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:28 crc kubenswrapper[4739]: E0218 14:00:28.409809 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:28 crc kubenswrapper[4739]: E0218 14:00:28.409960 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:28 crc kubenswrapper[4739]: E0218 14:00:28.410073 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.437006 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95
b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5d
f5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.454830 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc 
kubenswrapper[4739]: I0218 14:00:28.470739 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.479019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.479045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.479054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.479065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.479073 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.489988 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.505163 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38
fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.545269 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.562798 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a
29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.580202 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.581655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.581891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.581924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.581952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.581972 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.598721 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.614504 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.652137 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.677648 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb350660
92be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.684262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.684318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.684330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.684349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.684386 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.692162 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.705347 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0e
eb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.721804 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.736578 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.751015 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.765546 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.788687 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.788764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.788815 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.788835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.788849 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.891806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.891861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.891878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.891899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.891916 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.994762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.994792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.994801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.994815 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.994823 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.098386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.098483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.098502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.098527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.098544 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.200916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.200959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.200967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.200985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.200995 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.303662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.304020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.304160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.304290 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.304463 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.407932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.408299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.409290 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:29:26.959316227 +0000 UTC Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.409346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.409440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.409504 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.512502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.512956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.513062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.513165 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.513261 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.616330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.616380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.616397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.616420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.616439 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.719644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.719718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.719749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.719784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.719805 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.822494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.822901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.823041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.823197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.823336 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.926573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.926626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.926642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.926664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.926679 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.029359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.029403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.029418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.029438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.029471 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.131473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.131760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.131773 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.131799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.131810 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.236651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.236717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.236760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.236796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.236821 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.339218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.339284 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.339302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.339328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.339344 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.410125 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:17:04.047886194 +0000 UTC Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.410334 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.410382 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.410406 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:30 crc kubenswrapper[4739]: E0218 14:00:30.410490 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.410738 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:30 crc kubenswrapper[4739]: E0218 14:00:30.410797 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:30 crc kubenswrapper[4739]: E0218 14:00:30.410927 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:30 crc kubenswrapper[4739]: E0218 14:00:30.410657 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.442295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.442341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.442350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.442364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.442374 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.545262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.545339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.545355 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.545378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.545396 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.647948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.647998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.648010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.648027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.648040 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.751535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.751597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.751619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.751650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.751671 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.854491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.854545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.854561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.854581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.854597 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.957942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.957994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.958006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.958024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.958035 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.061547 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.061615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.061634 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.061660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.061678 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.165199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.165272 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.165291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.165318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.165336 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.268274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.268315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.268327 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.268343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.268356 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.370900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.370991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.371015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.371041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.371060 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.410652 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:21:35.406133448 +0000 UTC Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.473940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.473967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.473976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.473990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.474000 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.575998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.576043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.576056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.576074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.576086 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.678473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.678526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.678542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.678562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.678576 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.780771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.780816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.780830 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.780850 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.780864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.883785 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.883852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.883862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.883877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.883889 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.986806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.986884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.986908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.986992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.987025 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.089610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.089671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.089688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.089712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.089732 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.192924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.192967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.192978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.192995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.193007 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.295959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.296015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.296038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.296067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.296091 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.398242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.398294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.398314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.398337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.398354 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.409795 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.409863 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:32 crc kubenswrapper[4739]: E0218 14:00:32.410044 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.410078 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.410127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:32 crc kubenswrapper[4739]: E0218 14:00:32.410241 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:32 crc kubenswrapper[4739]: E0218 14:00:32.410329 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:32 crc kubenswrapper[4739]: E0218 14:00:32.410460 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.411098 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:52:38.566043259 +0000 UTC Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.500593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.500661 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.500674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.500693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.500711 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.603698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.603735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.603744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.603757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.603766 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.705952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.705997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.706007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.706022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.706031 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.808255 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.808288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.808298 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.808313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.808322 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.910518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.910550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.910558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.910572 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.910581 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.012908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.012946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.012957 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.012971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.012979 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.116170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.116201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.116210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.116224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.116232 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.218523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.218557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.218568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.218582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.218592 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.321039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.321079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.321089 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.321105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.321116 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.412075 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:25:14.628610914 +0000 UTC Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.423691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.423732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.423745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.423763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.423772 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.525986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.526048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.526072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.526099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.526117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.629566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.629614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.629655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.629677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.629692 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.732797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.732838 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.732847 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.732860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.732870 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.834705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.834776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.834791 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.834810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.834823 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.937682 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.937746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.937767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.937797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.937817 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.039683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.039947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.040043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.040144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.040209 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.142308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.142349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.142395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.142419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.142428 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.244725 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.244772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.244784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.244803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.244816 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.346692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.346724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.346734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.346747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.346755 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.409422 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.409479 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:34 crc kubenswrapper[4739]: E0218 14:00:34.409626 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.409641 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:34 crc kubenswrapper[4739]: E0218 14:00:34.409764 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.409978 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:34 crc kubenswrapper[4739]: E0218 14:00:34.410290 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:34 crc kubenswrapper[4739]: E0218 14:00:34.410197 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.412777 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:16:10.107609734 +0000 UTC Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.448861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.448911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.448926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.448946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.448963 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.551075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.551135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.551145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.551159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.551168 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.653350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.653374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.653382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.653402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.653412 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.755853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.755895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.755907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.755924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.755935 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.858577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.858617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.858629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.858647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.858659 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.960489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.960515 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.960522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.960534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.960543 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.024606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.024638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.024648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.024661 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.024671 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.036314 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:35Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.039516 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.039545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.039555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.039571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.039581 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.057672 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:35Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.060927 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.060979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.060992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.061010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.061023 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.075317 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:35Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.078931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.078972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.078982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.078995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.079004 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.091529 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:35Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.095579 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.095610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.095622 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.095640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.095653 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.105886 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:35Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.105992 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.107479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.107511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.107522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.107537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.107549 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.210034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.210105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.210124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.210151 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.210169 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.313138 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.313207 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.313220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.313261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.313273 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.413587 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 20:36:34.36123046 +0000 UTC Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.437751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.437788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.437798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.437813 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.437823 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.539735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.539792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.539807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.539826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.539842 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.641870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.641904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.641913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.641926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.641934 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.745013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.745191 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.745217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.745292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.745319 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.847517 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.847568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.847578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.847596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.847609 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.949517 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.949585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.949598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.949617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.949628 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.051944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.051997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.052010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.052025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.052035 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.154325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.154370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.154381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.154397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.154408 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.257306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.257342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.257354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.257369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.257380 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.360063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.360108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.360120 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.360138 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.360181 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.409604 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.409631 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.409725 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.409827 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:00:36 crc kubenswrapper[4739]: E0218 14:00:36.410018 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:00:36 crc kubenswrapper[4739]: E0218 14:00:36.410166 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e"
Feb 18 14:00:36 crc kubenswrapper[4739]: E0218 14:00:36.410239 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:00:36 crc kubenswrapper[4739]: E0218 14:00:36.410280 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.413685 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:52:09.572906847 +0000 UTC
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.462914 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.462974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.463132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.463160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.463171 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.566027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.566057 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.566067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.566081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.566090 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.668852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.668881 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.668890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.668904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.668916 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.772149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.772218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.772235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.772258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.772276 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.874916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.874956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.874965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.874980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.874990 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.977281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.977321 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.977333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.977349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.977360 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.080621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.080659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.080669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.080685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.080695 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:37Z","lastTransitionTime":"2026-02-18T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.184340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.184721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.184918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.185087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.185253 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:37Z","lastTransitionTime":"2026-02-18T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.287838 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.287878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.287896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.287913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.287925 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:37Z","lastTransitionTime":"2026-02-18T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.391986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.392040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.392058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.392082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.392100 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:37Z","lastTransitionTime":"2026-02-18T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.414800 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:46:08.575717429 +0000 UTC
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.494616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.494658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.494674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.494695 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.494710 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:37Z","lastTransitionTime":"2026-02-18T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.598132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.598181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.598195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.598216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.598228 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:37Z","lastTransitionTime":"2026-02-18T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.700706 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.700741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.700752 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.700767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.700778 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:37Z","lastTransitionTime":"2026-02-18T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.803152 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.803188 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.803197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.803210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.803218 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:37Z","lastTransitionTime":"2026-02-18T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.905650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.905717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.905734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.905758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.905775 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:37Z","lastTransitionTime":"2026-02-18T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.008350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.008383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.008394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.008412 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.008426 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.110404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.110565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.110582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.110605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.110629 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.212246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.212280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.212288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.212302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.212311 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.314411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.314458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.314468 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.314482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.314489 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.409580 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.409627 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:00:38 crc kubenswrapper[4739]: E0218 14:00:38.409698 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.409713 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:00:38 crc kubenswrapper[4739]: E0218 14:00:38.409790 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.409845 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:00:38 crc kubenswrapper[4739]: E0218 14:00:38.410144 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:00:38 crc kubenswrapper[4739]: E0218 14:00:38.411717 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.414919 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:41:09.385098714 +0000 UTC Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.416679 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.416722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.416738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.416761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.416778 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.422904 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.435107 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.452433 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13
:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.465727 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.476395 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.486316 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.499750 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.515168 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.518623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.518660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.518670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.518684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.518695 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.532235 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.544653 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.557814 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a
4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:
56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"
cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.566707 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc 
kubenswrapper[4739]: I0218 14:00:38.576129 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.585320 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.593728 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.608368 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.618664 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a
29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.621272 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.621297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.621306 4739 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.621318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.621326 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.628478 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.724117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.724148 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.724157 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.724170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.724178 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.826350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.826489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.826505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.826524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.826537 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.928783 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.928816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.928826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.928841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.928851 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.031174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.031204 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.031214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.031229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.031239 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.133960 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.134009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.134025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.134046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.134060 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.236420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.236493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.236508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.236526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.236538 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.338958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.339000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.339008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.339022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.339030 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.415505 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:37:06.327437664 +0000 UTC Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.441363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.441410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.441422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.441440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.441466 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.544432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.544519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.544541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.544563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.544579 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.647418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.647490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.647504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.647520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.647532 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.750550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.750613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.750635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.750662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.750682 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.853186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.853243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.853259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.853278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.853292 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.955793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.955827 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.955836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.955848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.955858 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.057502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.057545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.057558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.057576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.057588 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.159608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.159638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.159646 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.159658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.159667 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.262235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.262269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.262279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.262293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.262302 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.365473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.365565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.365583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.365606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.365623 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.410038 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.410079 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.410079 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.410164 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.410248 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.410519 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.410668 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.410685 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.411280 4739 scope.go:117] "RemoveContainer" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.411425 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.415731 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:54:12.728179733 +0000 UTC Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.467832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.467871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.467882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.467904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.467915 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.570106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.570154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.570171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.570194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.570215 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.670382 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.670542 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.670921 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:01:12.670898104 +0000 UTC m=+105.166619046 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.672369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.672480 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.672491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.672507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.672515 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.774335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.774380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.774391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.774404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.774414 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.876424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.876481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.876490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.876504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.876514 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.979318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.979366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.979379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.979396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.979409 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.082973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.083203 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.083214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.083232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.083243 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.185986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.186024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.186033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.186046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.186054 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.288027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.288098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.288122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.288150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.288167 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.392097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.392172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.392210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.392242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.392261 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.416764 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:35:42.730398323 +0000 UTC Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.495416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.495496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.495514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.495566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.495584 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.598244 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.598321 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.598339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.598365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.598425 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.700963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.701006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.701017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.701034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.701051 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.803327 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.803385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.803401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.803427 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.803468 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.906396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.906513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.906526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.906546 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.906559 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.009307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.009356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.009367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.009386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.009397 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.111969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.112009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.112019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.112034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.112047 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.214764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.214820 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.214829 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.214842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.214850 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.318084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.318133 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.318141 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.318160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.318169 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.409632 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.409764 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:42 crc kubenswrapper[4739]: E0218 14:00:42.409766 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.409632 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.409649 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:42 crc kubenswrapper[4739]: E0218 14:00:42.409831 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:42 crc kubenswrapper[4739]: E0218 14:00:42.409881 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:42 crc kubenswrapper[4739]: E0218 14:00:42.410146 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.416915 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:45:39.818271571 +0000 UTC Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.420081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.420117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.420126 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.420140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.420150 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.522522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.522577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.522589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.522608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.522620 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.624997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.625046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.625058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.625075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.625088 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.727915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.727990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.728009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.728026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.728038 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.831114 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.831147 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.831155 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.831169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.831178 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.933254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.933295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.933305 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.933321 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.933331 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.036378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.036440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.036493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.036521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.036539 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.138831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.138887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.138899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.138917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.138929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.242922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.242972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.242985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.243005 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.243016 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.345684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.345765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.345779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.345796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.345832 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.417911 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:23:08.604207077 +0000 UTC Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.447813 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.447854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.447863 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.447875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.447883 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.549863 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.550274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.550304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.550328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.550345 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.655435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.655550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.655569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.655597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.655614 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.758217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.758250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.758261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.758275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.758287 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.840351 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/0.log" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.840395 4739 generic.go:334] "Generic (PLEG): container finished" podID="ec8fd6de-f77b-48a7-848f-a1b94e866365" containerID="f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c" exitCode=1 Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.840430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerDied","Data":"f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.842242 4739 scope.go:117] "RemoveContainer" containerID="f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.852318 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.869928 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.878269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.878296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.878306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.878320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.878330 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.884727 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.900741 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.918432 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.937105 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.951244 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc 
kubenswrapper[4739]: I0218 14:00:43.963967 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.977940 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.980710 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 
14:00:43.980755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.980768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.980786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.980797 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.992185 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38
fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.009964 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.023005 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a
29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.037949 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.051955 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.074338 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.083424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.083480 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.083488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.083503 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.083512 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.105059 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.129546 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb350660
92be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.141239 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.186323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.186377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.186391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.186411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.186424 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.288857 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.288932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.288941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.288956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.288965 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.391106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.391143 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.391153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.391169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.391180 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.409952 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.410011 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.409963 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.409955 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:44 crc kubenswrapper[4739]: E0218 14:00:44.410067 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:44 crc kubenswrapper[4739]: E0218 14:00:44.410187 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:44 crc kubenswrapper[4739]: E0218 14:00:44.410346 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:44 crc kubenswrapper[4739]: E0218 14:00:44.410403 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.418423 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:19:21.237622149 +0000 UTC Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.493873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.493932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.493954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.493983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.494005 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.595905 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.595947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.595958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.595973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.595983 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.698976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.699037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.699054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.699074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.699090 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.801662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.801705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.801716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.801731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.801743 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.845268 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/0.log" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.845356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerStarted","Data":"c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.856720 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.867233 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.878243 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.892720 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.901748 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.904224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 
14:00:44.904250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.904258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.904288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.904297 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.918478 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.934904 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.946397 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.954762 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.965776 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.003554 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.005837 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.005873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.005882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.005896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.005905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.025363 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.038663 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.050692 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.061640 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.077194 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.099409 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.108022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.108056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.108067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.108082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.108093 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.158566 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc 
kubenswrapper[4739]: I0218 14:00:45.210984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.211025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.211034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.211049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.211058 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.314347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.314381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.314389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.314402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.314411 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418120 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418670 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:50:47.8518978 +0000 UTC Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.448100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.448162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.448186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.448216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.448242 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.468598 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.473292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.473338 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.473350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.473367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.473378 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.486360 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.490100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.490174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.490201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.490232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.490256 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.502924 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.506314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.506369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.506384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.506402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.506414 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.523352 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.527088 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.527137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.527154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.527174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.527188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.540249 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.540435 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.541913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.541960 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.541977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.541997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.542009 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.644934 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.645009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.645020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.645036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.645055 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.747665 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.747745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.747770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.747801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.747823 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.850361 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.850395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.850406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.850422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.850434 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.953343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.953415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.953436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.953499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.953517 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.056225 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.056297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.056316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.056342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.056362 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.159086 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.159148 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.159170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.159200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.159220 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.263031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.263100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.263123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.263152 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.263174 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.365351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.365401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.365418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.365465 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.365483 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.409934 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.410001 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:46 crc kubenswrapper[4739]: E0218 14:00:46.410135 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.410237 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.410301 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:46 crc kubenswrapper[4739]: E0218 14:00:46.410381 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:46 crc kubenswrapper[4739]: E0218 14:00:46.410504 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:46 crc kubenswrapper[4739]: E0218 14:00:46.410766 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.419130 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:03:32.956348702 +0000 UTC Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.467598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.467658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.467676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.467700 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.467717 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.570512 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.570585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.570602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.570628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.570644 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.673750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.673829 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.673844 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.673884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.673898 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.775959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.776021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.776043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.776072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.776098 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.879350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.879407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.879424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.879480 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.879497 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.982433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.982504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.982520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.982543 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.982560 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.085029 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.085137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.085153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.085179 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.085200 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.188013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.188065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.188082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.188104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.188121 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.290367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.290406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.290422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.290461 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.290473 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.393850 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.393963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.393989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.394018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.394043 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.419754 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:07:19.316949079 +0000 UTC Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.497140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.497174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.497183 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.497199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.497208 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.600071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.600129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.600150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.600175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.600193 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.704097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.704169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.704187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.704214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.704233 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.807283 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.807341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.807363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.807392 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.807414 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.910175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.910228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.910249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.910277 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.910304 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.012684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.012736 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.012755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.012777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.012792 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.115321 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.115368 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.115378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.115393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.115404 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.218181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.218208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.218217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.218232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.218241 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.320504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.320542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.320553 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.320568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.320579 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.409882 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:48 crc kubenswrapper[4739]: E0218 14:00:48.410000 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.410072 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.410088 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:48 crc kubenswrapper[4739]: E0218 14:00:48.410142 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.410093 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:48 crc kubenswrapper[4739]: E0218 14:00:48.410225 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:48 crc kubenswrapper[4739]: E0218 14:00:48.410272 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.420762 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:05:43.388258627 +0000 UTC Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.422323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.422358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.422371 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.422392 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.422404 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.428825 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d
182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.445151 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.462800 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.478139 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc 
kubenswrapper[4739]: I0218 14:00:48.494609 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.512282 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.524493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.524540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.524548 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.524564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.524574 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.529392 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.544806 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.574335 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.592239 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a
29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.623399 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.627784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.627846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.627864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.627887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.627902 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.643269 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.657611 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.674265 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.692732 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.710511 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.725813 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.730182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.730242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.730265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.730293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.730316 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.745849 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.832913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.833486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.833497 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.833513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.833523 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.935922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.935990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.936005 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.936021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.936031 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.038688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.038986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.039079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.039171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.039253 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.142075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.142697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.142839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.142962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.143080 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.246413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.246882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.246983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.247223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.247424 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.350511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.350581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.350591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.350611 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.350624 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.421252 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:08:06.833244214 +0000 UTC Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.453771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.454092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.454333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.454550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.454737 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.557370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.557403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.557414 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.557433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.557463 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.660882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.660924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.660940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.660957 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.660972 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.764140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.764189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.764206 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.764229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.764248 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.867606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.867658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.867669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.867690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.867703 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.970831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.970893 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.970908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.970930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.970945 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.073770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.073830 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.073846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.073870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.073885 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.176411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.176501 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.176520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.176546 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.176567 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.278869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.278918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.278933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.278955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.278972 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.382436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.382511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.382524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.382543 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.382556 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.409393 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.409523 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:50 crc kubenswrapper[4739]: E0218 14:00:50.409552 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.409611 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:50 crc kubenswrapper[4739]: E0218 14:00:50.409782 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.409806 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:50 crc kubenswrapper[4739]: E0218 14:00:50.409864 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:50 crc kubenswrapper[4739]: E0218 14:00:50.409924 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.422372 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:37:32.44963749 +0000 UTC Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.485590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.485649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.485668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.485696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.485713 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.588984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.589052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.589074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.589099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.589118 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.691758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.691814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.691832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.691858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.691875 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.795144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.795250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.795267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.795291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.795309 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.897713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.897768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.897790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.897815 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.897832 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.000581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.000644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.000663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.000687 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.000704 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.103343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.103388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.103397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.103411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.103421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.205702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.205766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.205776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.205793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.205802 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.308748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.308800 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.308820 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.308846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.308867 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.411681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.411730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.411746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.411768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.411783 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.423200 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:26:57.275205257 +0000 UTC Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.514536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.514586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.514599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.514619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.514633 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.617401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.617469 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.617482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.617499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.617510 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.719973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.720137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.720161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.720236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.720262 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.823649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.823719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.823737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.823768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.823791 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.926777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.926824 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.926835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.926852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.926864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.030074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.030150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.030173 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.030202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.030223 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.132660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.132717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.132733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.132757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.132774 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.193992 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.194169 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.194278 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.194239733 +0000 UTC m=+148.689960695 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.194302 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.194383 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.194529 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.194463268 +0000 UTC m=+148.690184210 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.194524 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.194659 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.194648103 +0000 UTC m=+148.690369035 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.234790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.234846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.234860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.234881 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc 
kubenswrapper[4739]: I0218 14:00:52.234898 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.295493 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.295570 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295653 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295674 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295686 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295742 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.29572683 +0000 UTC m=+148.791447752 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295740 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295779 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295793 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295855 4739 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.295837022 +0000 UTC m=+148.791557954 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.337508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.337568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.337591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.337619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.337639 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.409888 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.409947 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.409982 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.410497 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.410541 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.410622 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.410767 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.410849 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.411074 4739 scope.go:117] "RemoveContainer" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.424198 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:53:28.662025902 +0000 UTC Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.445570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.445730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.445825 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.445872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.445918 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.548570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.548612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.548622 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.548637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.548647 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.651203 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.651245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.651253 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.651266 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.651275 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.754043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.754082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.754098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.754117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.754131 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.855890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.855935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.855946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.855963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.855976 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.872513 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/2.log" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.874590 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.875644 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.886004 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d1
0f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.906614 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.918002 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.930293 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.942592 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.958758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.958801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.958809 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.958823 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.958835 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.959356 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.970989 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.985401 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.000053 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.015462 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.027332 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc 
kubenswrapper[4739]: I0218 14:00:53.040744 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.053841 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.060494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.060544 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.060561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.060583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.060597 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.067735 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.086381 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 
14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.103859 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a
29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.117186 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.136584 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.163370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.163411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.163426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.163473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.163491 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.266534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.266608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.266632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.266664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.266687 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.368752 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.368821 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.368846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.368875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.368895 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.422675 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.425198 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:40:14.544325039 +0000 UTC Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.471504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.471544 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.471554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.471570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.471581 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.574309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.574370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.574387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.574409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.574426 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.677922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.678268 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.678405 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.678568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.678691 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.780633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.780701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.780722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.780753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.780777 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.880261 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/3.log" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.881541 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/2.log" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.889097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.889370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.889573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.889712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.889829 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.890558 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" exitCode=1 Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.890656 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.890711 4739 scope.go:117] "RemoveContainer" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.892101 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:00:53 crc kubenswrapper[4739]: E0218 14:00:53.892586 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.910305 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.942878 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13
:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.961421 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb350660
92be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.972979 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.983294 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.992429 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.992507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.992523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.992541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.992555 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.001079 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.017177 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.035136 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.053121 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.073503 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.088799 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc 
kubenswrapper[4739]: I0218 14:00:54.094959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.095018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.095036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.095057 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.095074 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.105694 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b669cf4-28b3-484f-925b-49d6fab4e165\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734348fbaddb1f1106c5f33316276e3e4b941e731084a8379fd9bcef39a5f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.125129 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.136044 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.149998 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.177103 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:53Z\\\",\\\"message\\\":\\\"go:551] Creating *factory.egressNode crc took: 6.568708ms\\\\nI0218 14:00:53.294863 6867 factory.go:1336] Added *v1.Node event handler 7\\\\nI0218 14:00:53.294912 6867 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 14:00:53.294940 6867 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0218 14:00:53.294960 6867 services_controller.go:360] Finished syncing service 
check-endpoints on namespace openshift-apiserver for network=default : 2.349587ms\\\\nI0218 14:00:53.294984 6867 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:53.295010 6867 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:53.295052 6867 factory.go:656] Stopping watch factory\\\\nI0218 14:00:53.295091 6867 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:53.295111 6867 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:53.295330 6867 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 14:00:53.295466 6867 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 14:00:53.295513 6867 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:53.295541 6867 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:53.295620 6867 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.192536 4739 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.197580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.197680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.197702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.197727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.197745 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.206869 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.226183 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.300184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.300243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.300265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.300291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.300313 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.403744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.403854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.403882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.403917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.403945 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.410271 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.410396 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.410545 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:54 crc kubenswrapper[4739]: E0218 14:00:54.410426 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.410437 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:54 crc kubenswrapper[4739]: E0218 14:00:54.410675 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:54 crc kubenswrapper[4739]: E0218 14:00:54.410718 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:54 crc kubenswrapper[4739]: E0218 14:00:54.410796 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.425749 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:12:10.104358458 +0000 UTC Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.506527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.506560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.506569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.506588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.506599 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.608758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.608790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.608799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.608814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.608822 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.711763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.711832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.711853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.711884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.711908 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.815162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.815206 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.815214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.815229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.815239 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.894644 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/3.log" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.897888 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:00:54 crc kubenswrapper[4739]: E0218 14:00:54.898062 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.912580 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.918016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.918052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.918063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.918081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.918093 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.926144 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.946957 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.980898 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:53Z\\\",\\\"message\\\":\\\"go:551] Creating *factory.egressNode crc took: 6.568708ms\\\\nI0218 14:00:53.294863 6867 factory.go:1336] Added *v1.Node event handler 7\\\\nI0218 14:00:53.294912 6867 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 14:00:53.294940 6867 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0218 14:00:53.294960 6867 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 2.349587ms\\\\nI0218 14:00:53.294984 6867 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:53.295010 6867 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:53.295052 6867 factory.go:656] Stopping watch factory\\\\nI0218 14:00:53.295091 6867 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:53.295111 6867 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:53.295330 6867 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 14:00:53.295466 6867 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 14:00:53.295513 6867 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:53.295541 6867 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:53.295620 6867 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.998096 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a
29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.012264 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.021678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.021720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.021731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.021749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.021760 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.030909 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.045937 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.079688 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13
:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.105169 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.124290 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.124413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.124435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.124479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.124497 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.128583 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.144098 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.168414 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.189158 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.205372 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.221554 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.227213 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.227256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.227272 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.227296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.227313 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.247289 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f0
9da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.266062 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc 
kubenswrapper[4739]: I0218 14:00:55.280052 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b669cf4-28b3-484f-925b-49d6fab4e165\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734348fbaddb1f1106c5f33316276e3e4b941e731084a8379fd9bcef39a5f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.329351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.329475 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.329496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 
14:00:55.329520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.329538 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.426277 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:09:54.38791185 +0000 UTC Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.433074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.433163 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.433187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.433217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.433238 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.535756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.535789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.535799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.535813 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.535823 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.637929 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.638151 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.638159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.638172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.638180 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.740868 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.740901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.740909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.740921 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.740929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.843819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.843881 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.843899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.843923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.843943 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.858734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.858806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.858824 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.858848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.858865 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.880222 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.884609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.884660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.884677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.884731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.884748 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.904911 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.908814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.908868 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.908887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.908910 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.908927 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.928180 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.931677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.931726 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.931744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.931769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.931810 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.949101 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.952582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.952635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.952649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.952670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.952703 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.964098 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.964325 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.966271 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.966336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.966356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.966380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.966398 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.069177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.069211 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.069220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.069233 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.069241 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.171380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.171468 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.171488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.171514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.171535 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.274505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.274559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.274573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.274593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.274609 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.377663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.377705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.377715 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.377731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.377742 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.409378 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.409439 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.409480 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:56 crc kubenswrapper[4739]: E0218 14:00:56.409614 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.409640 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:56 crc kubenswrapper[4739]: E0218 14:00:56.409751 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:56 crc kubenswrapper[4739]: E0218 14:00:56.409854 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:56 crc kubenswrapper[4739]: E0218 14:00:56.409955 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.427356 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:59:07.633339544 +0000 UTC Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.480406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.480471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.480482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.480500 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.480510 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.583728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.583798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.583812 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.583841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.583862 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.687433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.687521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.687535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.687561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.687580 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.791344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.791396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.791418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.791489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.791515 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.894471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.894555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.894568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.894590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.894604 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.997617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.997683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.997704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.997729 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.997746 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.100263 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.100320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.100337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.100363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.100380 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.203537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.203609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.203625 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.203647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.203662 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.305836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.305965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.305994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.306020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.306038 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.410329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.410386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.410404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.410431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.410481 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.428232 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:33:31.360867868 +0000 UTC Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.513522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.513628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.513657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.513691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.513712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.616957 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.617030 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.617047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.617071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.617086 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.719877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.719943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.719958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.719982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.720001 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.823002 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.823060 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.823078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.823100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.823117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.926433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.926544 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.926568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.926600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.926625 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.029819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.029853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.029861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.029874 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.029881 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.133222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.133282 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.133299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.133322 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.133338 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.236920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.236964 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.236975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.236990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.237001 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.339947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.340026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.340074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.340098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.340115 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.409687 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:58 crc kubenswrapper[4739]: E0218 14:00:58.409869 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.409643 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.409970 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:58 crc kubenswrapper[4739]: E0218 14:00:58.410192 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.409970 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:58 crc kubenswrapper[4739]: E0218 14:00:58.410385 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:58 crc kubenswrapper[4739]: E0218 14:00:58.410558 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.428566 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:38:47.845607318 +0000 UTC Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.428737 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.443566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.443626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.443647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.443672 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.443690 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.447638 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b669cf4-28b3-484f-925b-49d6fab4e165\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734348fbaddb1f1106c5f33316276e3e4b941e731084a8379fd9bcef39a5f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.469843 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.488556 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.511489 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e070
9f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.544599 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:53Z\\\",\\\"message\\\":\\\"go:551] Creating *factory.egressNode crc took: 6.568708ms\\\\nI0218 14:00:53.294863 6867 factory.go:1336] Added *v1.Node event handler 7\\\\nI0218 14:00:53.294912 6867 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 14:00:53.294940 6867 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0218 14:00:53.294960 6867 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 2.349587ms\\\\nI0218 14:00:53.294984 6867 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:53.295010 6867 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:53.295052 6867 factory.go:656] Stopping watch factory\\\\nI0218 14:00:53.295091 6867 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:53.295111 6867 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:53.295330 6867 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 14:00:53.295466 6867 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 14:00:53.295513 6867 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:53.295541 6867 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:53.295620 6867 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b
62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.545664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.545714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.545730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.545751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.545765 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.560926 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.578828 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.601264 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.616364 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.633480 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.649214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 
14:00:58.649275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.649290 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.649316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.649332 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.666161 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.687598 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.703898 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.722992 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.739714 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.752961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.753019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.753038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.753064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.753082 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.762646 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.782239 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.799922 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.855991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.856044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.856059 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.856080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.856096 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.959094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.959163 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.959175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.959194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.959227 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.062267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.062331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.062355 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.062384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.062406 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.166199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.166239 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.166250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.166289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.166302 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.269866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.269930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.269954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.269982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.270003 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.372280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.372370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.372409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.372547 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.372608 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.429132 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:22:47.694989061 +0000 UTC
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.475202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.475252 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.475264 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.475281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.475291 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.577483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.577555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.577576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.577606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.577626 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.685261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.685332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.685343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.685358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.685367 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.788941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.789054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.789073 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.789185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.789204 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.891567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.891619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.891633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.891655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.891671 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.995389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.995478 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.995500 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.995530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.995549 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.098176 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.098403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.098523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.098594 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.098664 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.201908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.201958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.201972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.201990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.202002 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.304598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.304650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.304665 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.304685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.304700 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.407562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.407592 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.407600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.407612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.407621 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.411614 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:00 crc kubenswrapper[4739]: E0218 14:01:00.411747 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.411965 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:01:00 crc kubenswrapper[4739]: E0218 14:01:00.412019 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.412120 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:01:00 crc kubenswrapper[4739]: E0218 14:01:00.412169 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.412263 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:01:00 crc kubenswrapper[4739]: E0218 14:01:00.412314 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.429830 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:46:23.394490779 +0000 UTC
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.510515 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.510910 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.511079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.511208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.511337 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.614400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.614732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.614808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.614889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.614946 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.718869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.718943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.718963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.718992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.719013 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.821647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.821707 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.821727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.821752 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.821766 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.923488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.923729 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.923792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.923885 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.923990 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.027066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.027540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.027703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.027867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.028022 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.130888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.130963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.130981 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.131007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.131025 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.234040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.234098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.234112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.234129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.234139 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.337427 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.337769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.337882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.337980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.338061 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.430908 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:40:54.724329884 +0000 UTC Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.443899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.443946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.443959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.443975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.443985 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.546160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.546220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.546229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.546243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.546253 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.648980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.649393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.649616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.649789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.649925 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.752946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.752983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.752992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.753006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.753017 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.855242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.855289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.855299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.855316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.855327 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.958227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.958305 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.958328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.958355 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.958372 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.061094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.061147 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.061164 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.061185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.061202 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.164063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.164137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.164161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.164191 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.164213 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.267041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.267122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.267145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.267176 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.267198 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.370344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.370381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.370390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.370406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.370414 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.410328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.410490 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:02 crc kubenswrapper[4739]: E0218 14:01:02.410610 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.410735 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.410781 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:02 crc kubenswrapper[4739]: E0218 14:01:02.410838 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:02 crc kubenswrapper[4739]: E0218 14:01:02.410965 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:02 crc kubenswrapper[4739]: E0218 14:01:02.411073 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.431586 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:44:08.554272092 +0000 UTC Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.473269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.473652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.473805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.473993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.474162 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.576826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.577154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.577295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.577536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.577700 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.680750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.680792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.680803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.680821 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.680832 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.783775 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.783816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.783826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.783842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.783857 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.886315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.886380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.886395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.886415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.886431 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.989535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.989577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.989593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.989613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.989629 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.093095 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.093146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.093163 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.093185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.093202 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.195846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.195872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.195881 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.195895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.195904 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.299303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.299342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.299350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.299363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.299375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.402294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.402351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.402369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.402391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.402409 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.432274 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:28:22.250237621 +0000 UTC Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.505770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.505834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.505849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.505869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.505883 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.608233 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.608278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.608290 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.608309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.608321 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.711551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.711612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.711629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.711656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.711674 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.814020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.814072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.814085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.814102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.814115 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.917768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.917848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.917865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.917887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.917905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.020843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.020921 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.020935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.020952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.020962 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.123962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.124010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.124021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.124039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.124054 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.227366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.227426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.227477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.227507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.227529 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.329743 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.329812 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.329836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.329864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.329887 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.409908 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.409950 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:04 crc kubenswrapper[4739]: E0218 14:01:04.410040 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.410067 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:04 crc kubenswrapper[4739]: E0218 14:01:04.410133 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.410175 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:04 crc kubenswrapper[4739]: E0218 14:01:04.410370 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:04 crc kubenswrapper[4739]: E0218 14:01:04.410404 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.432395 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:30:47.777927803 +0000 UTC Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.433054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.433128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.433145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.433166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.433214 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.536136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.536198 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.536215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.536238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.536254 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.638834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.638887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.638905 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.638939 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.638987 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.742024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.742071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.742083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.742099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.742109 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.845640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.845727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.845748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.845779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.845800 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.949203 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.949273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.949296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.949328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.949349 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.052647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.052708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.052732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.052758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.052778 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.155242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.155301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.155320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.155349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.155394 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.257573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.257637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.257654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.257678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.257693 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.360090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.360144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.360162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.360186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.360202 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.432541 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:43:56.719541515 +0000 UTC Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.462614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.462666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.462691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.462721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.462741 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.565329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.565360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.565369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.565382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.565391 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.668488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.668539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.668554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.668574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.668587 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.771014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.771052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.771063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.771079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.771092 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.873550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.873590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.873599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.873613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.873622 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.976929 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.976983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.976998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.977017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.977029 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.031144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.031206 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.031231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.031265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.031288 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.052067 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:01:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.057008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.057048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.057060 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.057076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.057085 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.073324 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:01:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.077961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.078007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.078021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.078045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.078069 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.093281 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:01:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.097812 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.097861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.097873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.097890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.097905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.116910 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:01:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.121019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.121076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.121090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.121110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.121120 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.137355 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:01:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.138238 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.139873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.139923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.139934 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.139952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.139966 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.242053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.242113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.242130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.242154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.242170 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.344708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.345071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.345086 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.345105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.345117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.410229 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.410271 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.410416 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.410465 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.410590 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.410742 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.410819 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.410938 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.433188 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:32:09.278404774 +0000 UTC Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.447297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.447337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.447347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.447363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.447373 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.549807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.549855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.549863 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.549878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.549887 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.651875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.651936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.651954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.651977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.651992 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.755624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.755709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.755734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.755767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.755791 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.858496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.858582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.858604 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.858636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.858659 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.961316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.961383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.961413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.961479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.961503 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.065010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.065083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.065106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.065134 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.065155 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.168268 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.168337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.168378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.168417 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.168473 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.271349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.271419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.271473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.271507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.271529 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.374502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.374547 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.374563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.374583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.374597 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.433373 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:46:37.253462173 +0000 UTC Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.477519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.477565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.477582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.477607 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.477625 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.580034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.580108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.580132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.580166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.580188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.683300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.683377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.683399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.683425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.683483 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.786048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.786154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.786196 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.786257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.786279 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.889383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.889495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.889521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.889549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.889569 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.992379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.992420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.992432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.992466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.992479 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.094224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.094263 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.094274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.094289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.094299 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.197373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.197438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.197486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.197510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.197526 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.299887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.299962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.299994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.300026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.300049 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.403644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.403697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.403713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.403735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.403752 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.410774 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:08 crc kubenswrapper[4739]: E0218 14:01:08.410985 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.411058 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.411084 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:01:08 crc kubenswrapper[4739]: E0218 14:01:08.411133 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.411108 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:01:08 crc kubenswrapper[4739]: E0218 14:01:08.411251 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:01:08 crc kubenswrapper[4739]: E0218 14:01:08.411592 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.434120 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:35:22.116945532 +0000 UTC
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.473017 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mdk59" podStartSLOduration=75.47299088 podStartE2EDuration="1m15.47299088s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.472817245 +0000 UTC m=+100.968538217" watchObservedRunningTime="2026-02-18 14:01:08.47299088 +0000 UTC m=+100.968711842"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.502072 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-h9slg" podStartSLOduration=74.50205181 podStartE2EDuration="1m14.50205181s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.489669142 +0000 UTC m=+100.985390104" watchObservedRunningTime="2026-02-18 14:01:08.50205181 +0000 UTC m=+100.997772732"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.507162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.507296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.507317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.507339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.507397 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.558706 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" podStartSLOduration=74.558682905 podStartE2EDuration="1m14.558682905s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.557665381 +0000 UTC m=+101.053386333" watchObservedRunningTime="2026-02-18 14:01:08.558682905 +0000 UTC m=+101.054403847"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.597884 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.59786748 podStartE2EDuration="15.59786748s" podCreationTimestamp="2026-02-18 14:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.582737435 +0000 UTC m=+101.078458367" watchObservedRunningTime="2026-02-18 14:01:08.59786748 +0000 UTC m=+101.093588402"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.598228 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=81.598224979 podStartE2EDuration="1m21.598224979s" podCreationTimestamp="2026-02-18 13:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.59785177 +0000 UTC m=+101.093572722" watchObservedRunningTime="2026-02-18 14:01:08.598224979 +0000 UTC m=+101.093945901"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.610238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.610274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.610287 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.610303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.610316 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.620746 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podStartSLOduration=74.620727421 podStartE2EDuration="1m14.620727421s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.620482975 +0000 UTC m=+101.116203917" watchObservedRunningTime="2026-02-18 14:01:08.620727421 +0000 UTC m=+101.116448343"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.666785 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" podStartSLOduration=74.666769231 podStartE2EDuration="1m14.666769231s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.666767621 +0000 UTC m=+101.162488553" watchObservedRunningTime="2026-02-18 14:01:08.666769231 +0000 UTC m=+101.162490153"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.698094 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.698074926 podStartE2EDuration="49.698074926s" podCreationTimestamp="2026-02-18 14:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.682432299 +0000 UTC m=+101.178153261" watchObservedRunningTime="2026-02-18 14:01:08.698074926 +0000 UTC m=+101.193795868"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.710817 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-p98v4" podStartSLOduration=75.710796453 podStartE2EDuration="1m15.710796453s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.710358802 +0000 UTC m=+101.206079754" watchObservedRunningTime="2026-02-18 14:01:08.710796453 +0000 UTC m=+101.206517385"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.712099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.712264 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.712362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.712482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.712608 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.732056 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.732034885 podStartE2EDuration="1m18.732034885s" podCreationTimestamp="2026-02-18 13:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.731066831 +0000 UTC m=+101.226787793" watchObservedRunningTime="2026-02-18 14:01:08.732034885 +0000 UTC m=+101.227755827"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.745386 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.745362396 podStartE2EDuration="1m21.745362396s" podCreationTimestamp="2026-02-18 13:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.74471782 +0000 UTC m=+101.240438762" watchObservedRunningTime="2026-02-18 14:01:08.745362396 +0000 UTC m=+101.241083358"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.814787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.814853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.814871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.814895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.814914 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.918358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.918434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.918488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.918519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.918541 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.021537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.021568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.021578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.021798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.021811 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.125043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.125110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.125136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.125168 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.125190 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.227995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.228064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.228082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.228108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.228125 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.331047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.331119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.331138 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.331164 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.331183 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.411057 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed"
Feb 18 14:01:09 crc kubenswrapper[4739]: E0218 14:01:09.411360 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434267 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:35:35.503884896 +0000 UTC
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434883 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434906 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434959 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.538610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.538668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.538691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.538722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.538743 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.640916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.640985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.640998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.641013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.641027 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.744205 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.744280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.744304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.744336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.744360 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.846914 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.846971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.846987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.847010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.847026 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.950142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.950182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.950192 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.950209 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.950220 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.052744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.052814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.052832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.052855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.052872 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.155708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.155767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.155779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.155796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.155807 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.258855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.258936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.258958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.258985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.259007 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.362582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.362655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.362672 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.362698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.362714 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.409713 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.409713 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.409862 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:01:10 crc kubenswrapper[4739]: E0218 14:01:10.410042 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.410106 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:01:10 crc kubenswrapper[4739]: E0218 14:01:10.410224 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:01:10 crc kubenswrapper[4739]: E0218 14:01:10.410396 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:10 crc kubenswrapper[4739]: E0218 14:01:10.410624 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.435153 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 05:32:09.778551616 +0000 UTC Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.466000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.466048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.466059 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.466079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.466093 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.569223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.569344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.569368 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.569395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.569421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.672077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.672131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.672149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.672171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.672188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.774782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.774854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.774879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.774906 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.774926 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.877315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.877367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.877380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.877400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.877412 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.979948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.980006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.980022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.980045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.980059 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.082917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.083007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.083032 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.083063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.083085 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.186364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.186437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.186523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.186554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.186575 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.289862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.289920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.289941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.289970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.289992 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.393220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.393768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.393799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.393844 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.393870 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.435522 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:00:08.088607268 +0000 UTC Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.497583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.497659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.497682 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.497715 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.497739 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.600798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.600843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.600852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.600882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.600893 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.703714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.703769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.703788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.703810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.703827 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.808112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.808160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.808195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.808215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.808229 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.911348 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.911411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.911430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.911510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.911543 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.013593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.013632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.013642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.013656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.013667 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.115421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.115810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.115935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.116045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.116145 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.219052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.219415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.219675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.219900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.220103 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.322860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.323215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.323416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.323685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.323900 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.409419 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.409487 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.409563 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.410222 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.409885 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.410288 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.410360 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.410474 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.426908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.426967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.427022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.427053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.427078 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.436639 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:25:34.814857393 +0000 UTC
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.529304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.529372 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.529399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.529431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.529501 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.632532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.632595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.632615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.632640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.632658 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.720153 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.720360 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.720532 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:02:16.720439083 +0000 UTC m=+169.216160045 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.736293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.736365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.736382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.736409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.736430 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.839536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.839598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.839616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.839640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.839658 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.942704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.942761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.942778 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.942800 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.942819 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.046131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.046206 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.046220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.046246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.046261 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.149591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.149647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.149659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.149676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.149690 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.251970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.252012 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.252020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.252034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.252044 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.354216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.354281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.354300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.354324 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.354342 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.437512 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:56:10.888610682 +0000 UTC
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.456397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.456494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.456505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.456529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.456541 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.560352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.560536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.560562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.560596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.560620 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.664393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.664474 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.664487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.664506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.664519 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.768532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.768590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.768608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.768634 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.768647 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.871113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.871189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.871212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.871240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.871259 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.974080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.974210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.974228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.974280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.974301 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.077630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.077681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.077693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.077712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.077724 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.180823 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.180861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.180870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.180889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.180899 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.284257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.284312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.284326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.284346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.284359 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.386636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.386747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.386806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.386831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.386848 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.409601 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.409637 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.409695 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:01:14 crc kubenswrapper[4739]: E0218 14:01:14.409739 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:01:14 crc kubenswrapper[4739]: E0218 14:01:14.409838 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.409854 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:01:14 crc kubenswrapper[4739]: E0218 14:01:14.409966 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e"
Feb 18 14:01:14 crc kubenswrapper[4739]: E0218 14:01:14.410097 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.437867 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:26:40.728225011 +0000 UTC
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.489572 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.489654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.489677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.489714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.489737 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.592675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.592744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.592761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.592785 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.592802 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.695553 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.695603 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.695616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.695635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.695649 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.798122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.798190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.798213 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.798243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.798267 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.900988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.901049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.901061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.901078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.901089 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.003869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.003980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.003995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.004016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.004030 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.107275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.107331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.107342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.107365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.107378 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.209673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.209744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.209758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.209778 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.209793 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.312748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.312806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.312825 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.312852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.312869 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.416936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.416991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.417007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.417023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.417034 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.438896 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:48:20.132858156 +0000 UTC Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.519683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.519727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.519769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.519792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.519808 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.622599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.622650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.622663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.622680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.622692 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.725926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.725983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.725999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.726021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.726042 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.829569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.829663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.829688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.829731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.829748 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.933046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.933100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.933113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.933130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.933141 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.036246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.036313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.036329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.036352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.036369 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:16Z","lastTransitionTime":"2026-02-18T14:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.139565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.139634 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.139647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.139674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.139688 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:16Z","lastTransitionTime":"2026-02-18T14:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.242628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.242696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.242719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.242750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.242771 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:16Z","lastTransitionTime":"2026-02-18T14:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.311660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.311730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.311744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.311765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.311779 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:16Z","lastTransitionTime":"2026-02-18T14:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.375058 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq"] Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.375685 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.378385 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.380008 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.380376 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.380435 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.410826 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:16 crc kubenswrapper[4739]: E0218 14:01:16.411042 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.411333 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.411423 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:16 crc kubenswrapper[4739]: E0218 14:01:16.411650 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.411678 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:16 crc kubenswrapper[4739]: E0218 14:01:16.411786 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:16 crc kubenswrapper[4739]: E0218 14:01:16.412241 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.439857 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:50:08.322800461 +0000 UTC Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.439918 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.450271 4739 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.462512 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.462604 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.462719 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.463097 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.463206 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564591 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564701 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564764 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564811 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564890 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564944 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564952 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 
18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.566876 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.575610 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.593629 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.700268 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.997119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" event={"ID":"7c4ba118-afe3-4671-93a3-76c84f2bfcdf","Type":"ContainerStarted","Data":"9957416afab0fb79b3fec857960d4f1681be8e0c4aa09a862d135e93a0e60639"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.997228 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" event={"ID":"7c4ba118-afe3-4671-93a3-76c84f2bfcdf","Type":"ContainerStarted","Data":"7bc88fbbe4707321088da2378557fba0b9dc2706dfe57b74cb1194bdef3be1eb"} Feb 18 14:01:17 crc kubenswrapper[4739]: I0218 14:01:17.018237 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" podStartSLOduration=83.01820987 podStartE2EDuration="1m23.01820987s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:17.017948354 +0000 UTC m=+109.513669336" watchObservedRunningTime="2026-02-18 14:01:17.01820987 +0000 UTC m=+109.513930822" Feb 18 14:01:18 crc kubenswrapper[4739]: I0218 14:01:18.409976 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:18 crc kubenswrapper[4739]: I0218 14:01:18.410039 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:18 crc kubenswrapper[4739]: I0218 14:01:18.409980 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:18 crc kubenswrapper[4739]: I0218 14:01:18.410099 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:18 crc kubenswrapper[4739]: E0218 14:01:18.411701 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:18 crc kubenswrapper[4739]: E0218 14:01:18.412429 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:18 crc kubenswrapper[4739]: E0218 14:01:18.412654 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:18 crc kubenswrapper[4739]: E0218 14:01:18.412786 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:20 crc kubenswrapper[4739]: I0218 14:01:20.410459 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:20 crc kubenswrapper[4739]: I0218 14:01:20.410680 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:20 crc kubenswrapper[4739]: E0218 14:01:20.410727 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:20 crc kubenswrapper[4739]: I0218 14:01:20.410550 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:20 crc kubenswrapper[4739]: E0218 14:01:20.410899 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:20 crc kubenswrapper[4739]: I0218 14:01:20.410580 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:20 crc kubenswrapper[4739]: E0218 14:01:20.411020 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:20 crc kubenswrapper[4739]: E0218 14:01:20.411081 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:22 crc kubenswrapper[4739]: I0218 14:01:22.410296 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:22 crc kubenswrapper[4739]: I0218 14:01:22.410370 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:22 crc kubenswrapper[4739]: I0218 14:01:22.410311 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:22 crc kubenswrapper[4739]: E0218 14:01:22.410617 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:22 crc kubenswrapper[4739]: I0218 14:01:22.410637 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:22 crc kubenswrapper[4739]: E0218 14:01:22.410745 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:22 crc kubenswrapper[4739]: E0218 14:01:22.411290 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:22 crc kubenswrapper[4739]: E0218 14:01:22.411387 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:22 crc kubenswrapper[4739]: I0218 14:01:22.411908 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:01:22 crc kubenswrapper[4739]: E0218 14:01:22.412184 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:01:24 crc kubenswrapper[4739]: I0218 14:01:24.410358 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:24 crc kubenswrapper[4739]: E0218 14:01:24.411069 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:24 crc kubenswrapper[4739]: I0218 14:01:24.410518 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:24 crc kubenswrapper[4739]: E0218 14:01:24.411340 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:24 crc kubenswrapper[4739]: I0218 14:01:24.410421 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:24 crc kubenswrapper[4739]: I0218 14:01:24.410537 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:24 crc kubenswrapper[4739]: E0218 14:01:24.411662 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:24 crc kubenswrapper[4739]: E0218 14:01:24.411845 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:26 crc kubenswrapper[4739]: I0218 14:01:26.409964 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:26 crc kubenswrapper[4739]: I0218 14:01:26.410001 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:26 crc kubenswrapper[4739]: I0218 14:01:26.410096 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:26 crc kubenswrapper[4739]: E0218 14:01:26.410253 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:26 crc kubenswrapper[4739]: I0218 14:01:26.410572 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:26 crc kubenswrapper[4739]: E0218 14:01:26.410685 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:26 crc kubenswrapper[4739]: E0218 14:01:26.410831 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:26 crc kubenswrapper[4739]: E0218 14:01:26.410929 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:28 crc kubenswrapper[4739]: I0218 14:01:28.409665 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:28 crc kubenswrapper[4739]: I0218 14:01:28.409648 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:28 crc kubenswrapper[4739]: I0218 14:01:28.409838 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.409845 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.409983 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:28 crc kubenswrapper[4739]: I0218 14:01:28.409665 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.410145 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.410336 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.422958 4739 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.537687 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.040987 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/1.log" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.041824 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/0.log" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.041967 4739 generic.go:334] "Generic (PLEG): container finished" podID="ec8fd6de-f77b-48a7-848f-a1b94e866365" containerID="c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67" exitCode=1 Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.042060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerDied","Data":"c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67"} Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.042173 4739 scope.go:117] "RemoveContainer" containerID="f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.042797 4739 scope.go:117] "RemoveContainer" containerID="c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67" Feb 18 14:01:30 crc kubenswrapper[4739]: E0218 14:01:30.043109 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-h9slg_openshift-multus(ec8fd6de-f77b-48a7-848f-a1b94e866365)\"" pod="openshift-multus/multus-h9slg" podUID="ec8fd6de-f77b-48a7-848f-a1b94e866365" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.410095 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.410159 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:30 crc kubenswrapper[4739]: E0218 14:01:30.410275 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.410310 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.410321 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:30 crc kubenswrapper[4739]: E0218 14:01:30.410485 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:30 crc kubenswrapper[4739]: E0218 14:01:30.410603 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:30 crc kubenswrapper[4739]: E0218 14:01:30.410764 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:31 crc kubenswrapper[4739]: I0218 14:01:31.047509 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/1.log" Feb 18 14:01:32 crc kubenswrapper[4739]: I0218 14:01:32.409919 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:32 crc kubenswrapper[4739]: I0218 14:01:32.410059 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:32 crc kubenswrapper[4739]: I0218 14:01:32.410165 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:32 crc kubenswrapper[4739]: E0218 14:01:32.410067 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:32 crc kubenswrapper[4739]: E0218 14:01:32.410292 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:32 crc kubenswrapper[4739]: E0218 14:01:32.410408 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:32 crc kubenswrapper[4739]: I0218 14:01:32.410489 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:32 crc kubenswrapper[4739]: E0218 14:01:32.410555 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:33 crc kubenswrapper[4739]: E0218 14:01:33.538714 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:01:34 crc kubenswrapper[4739]: I0218 14:01:34.410696 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:34 crc kubenswrapper[4739]: I0218 14:01:34.410736 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:34 crc kubenswrapper[4739]: I0218 14:01:34.410925 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:34 crc kubenswrapper[4739]: I0218 14:01:34.410937 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:34 crc kubenswrapper[4739]: E0218 14:01:34.410914 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:34 crc kubenswrapper[4739]: E0218 14:01:34.411032 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:34 crc kubenswrapper[4739]: E0218 14:01:34.411120 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:34 crc kubenswrapper[4739]: E0218 14:01:34.411207 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:34 crc kubenswrapper[4739]: I0218 14:01:34.412318 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.062802 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/3.log" Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.066927 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8"} Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.067632 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.126745 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podStartSLOduration=101.126732052 podStartE2EDuration="1m41.126732052s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:35.125623035 +0000 UTC m=+127.621343977" watchObservedRunningTime="2026-02-18 14:01:35.126732052 +0000 UTC m=+127.622452974" Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.403485 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nhkmm"] Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.403616 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:35 crc kubenswrapper[4739]: E0218 14:01:35.403754 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:36 crc kubenswrapper[4739]: I0218 14:01:36.410488 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:36 crc kubenswrapper[4739]: E0218 14:01:36.410893 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:36 crc kubenswrapper[4739]: I0218 14:01:36.411204 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:36 crc kubenswrapper[4739]: E0218 14:01:36.411298 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:36 crc kubenswrapper[4739]: I0218 14:01:36.411558 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:36 crc kubenswrapper[4739]: E0218 14:01:36.411648 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:37 crc kubenswrapper[4739]: I0218 14:01:37.409268 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:37 crc kubenswrapper[4739]: E0218 14:01:37.409478 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:38 crc kubenswrapper[4739]: I0218 14:01:38.409873 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:38 crc kubenswrapper[4739]: I0218 14:01:38.409954 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:38 crc kubenswrapper[4739]: I0218 14:01:38.410583 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:01:38 crc kubenswrapper[4739]: E0218 14:01:38.411856 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:01:38 crc kubenswrapper[4739]: E0218 14:01:38.411946 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:01:38 crc kubenswrapper[4739]: E0218 14:01:38.412047 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:01:38 crc kubenswrapper[4739]: E0218 14:01:38.539610 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 18 14:01:39 crc kubenswrapper[4739]: I0218 14:01:39.409745 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:39 crc kubenswrapper[4739]: E0218 14:01:39.409940 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e"
Feb 18 14:01:40 crc kubenswrapper[4739]: I0218 14:01:40.410354 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:01:40 crc kubenswrapper[4739]: I0218 14:01:40.410362 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:01:40 crc kubenswrapper[4739]: E0218 14:01:40.410613 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:01:40 crc kubenswrapper[4739]: I0218 14:01:40.410708 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:01:40 crc kubenswrapper[4739]: E0218 14:01:40.410736 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:01:40 crc kubenswrapper[4739]: E0218 14:01:40.410883 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:01:40 crc kubenswrapper[4739]: I0218 14:01:40.612122 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94"
Feb 18 14:01:41 crc kubenswrapper[4739]: I0218 14:01:41.410038 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:41 crc kubenswrapper[4739]: E0218 14:01:41.410491 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e"
Feb 18 14:01:41 crc kubenswrapper[4739]: I0218 14:01:41.410682 4739 scope.go:117] "RemoveContainer" containerID="c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67"
Feb 18 14:01:42 crc kubenswrapper[4739]: I0218 14:01:42.093638 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/1.log"
Feb 18 14:01:42 crc kubenswrapper[4739]: I0218 14:01:42.093694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerStarted","Data":"d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1"}
Feb 18 14:01:42 crc kubenswrapper[4739]: I0218 14:01:42.409614 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:01:42 crc kubenswrapper[4739]: I0218 14:01:42.409687 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:01:42 crc kubenswrapper[4739]: E0218 14:01:42.409818 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:01:42 crc kubenswrapper[4739]: I0218 14:01:42.409901 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:01:42 crc kubenswrapper[4739]: E0218 14:01:42.410037 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:01:42 crc kubenswrapper[4739]: E0218 14:01:42.410070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:01:43 crc kubenswrapper[4739]: I0218 14:01:43.409561 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:43 crc kubenswrapper[4739]: E0218 14:01:43.409755 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e"
Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.409966 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.410037 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.410492 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.412547 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.413057 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.413084 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.417248 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 18 14:01:45 crc kubenswrapper[4739]: I0218 14:01:45.410158 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:45 crc kubenswrapper[4739]: I0218 14:01:45.412176 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 18 14:01:45 crc kubenswrapper[4739]: I0218 14:01:45.413942 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.191437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.238958 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n78q8"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.240162 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.243721 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.245422 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.245807 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.247309 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.250014 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.250506 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.250933 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.251293 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.251756 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.252150 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.252546 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.252670 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.253389 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.260849 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.261656 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.261924 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.263352 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.263383 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.263606 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.264024 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.264684 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.265523 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.265695 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.266076 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.266319 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.266578 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.275123 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.275485 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.275546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.275677 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.276073 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.279175 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.286904 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.287147 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.287696 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.288068 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.288782 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.293049 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.302733 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.304623 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.306942 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.307103 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-rtb8n"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.315464 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316116 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316140 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-image-import-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316162 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drssc\" (UniqueName: \"kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316182 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-serving-cert\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316221 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316244 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-config\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316265 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-node-pullsecrets\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316293 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316312 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-client\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316333 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcltt\" (UniqueName: \"kubernetes.io/projected/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-kube-api-access-mcltt\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316353 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smwfw\" (UniqueName: \"kubernetes.io/projected/d41d7405-9b25-414a-a247-1d945df68f89-kube-api-access-smwfw\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316395 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-encryption-config\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316426 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316475 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-encryption-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316495 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316514 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-serving-cert\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316535 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-audit\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-etcd-serving-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316596 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srsh9\" (UniqueName: \"kubernetes.io/projected/86f15b94-810d-4448-a663-fd8862f0e601-kube-api-access-srsh9\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316655 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.317946 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bdfq\" (UniqueName: \"kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318016 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-policies\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318064 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41d7405-9b25-414a-a247-1d945df68f89-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-dir\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318110 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-images\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318197 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318227 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-audit-dir\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318248 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxl96\" (UniqueName: \"kubernetes.io/projected/7a738e9a-0692-4476-b9ba-930e3bdc34d2-kube-api-access-vxl96\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-etcd-client\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318287 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-serving-cert\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318309 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.323350 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.323932 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9zgsz"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.324307 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.324590 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.324950 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.325317 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fqdjl"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.325617 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.325897 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.326269 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.326607 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fqdjl"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.326676 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.327061 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rtb8n"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.327625 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.327689 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.328254 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.328521 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.328756 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.329047 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.329381 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.329557 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.329744 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.329846 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.330019 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.331403 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.332034 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.332426 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r2dqq"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.332580 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.333294 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.343305 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.343343 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b2m46"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.343800 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.344215 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.347306 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.351773 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352084 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352296 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n78q8"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352322 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5cdhr"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352644 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352708 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352780 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.355681 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.356471 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.360989 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.361570 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.377112 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.377493 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.377680 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.378064 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.378319 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.378499 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.379694 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.380371 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zzrbt"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.381802 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.382623 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.383035 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.383316 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.383400 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.383819 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.383802 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.385871 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.386425 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.386878 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.392437 4739 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.392753 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.393022 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.392840 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.397657 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.406497 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.406880 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.407117 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.407356 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.407721 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.408315 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.408516 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.409121 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.409988 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.412639 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.412811 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.413458 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.413942 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.414122 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.415571 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.417192 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.417263 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-67w4c"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.417964 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418749 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-encryption-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418805 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-serving-cert\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: 
\"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418824 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418846 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-audit\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418862 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418876 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-etcd-serving-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srsh9\" (UniqueName: 
\"kubernetes.io/projected/86f15b94-810d-4448-a663-fd8862f0e601-kube-api-access-srsh9\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418925 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84627667-4128-47e5-a611-c650633e8362-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418940 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418955 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bdfq\" (UniqueName: \"kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc 
kubenswrapper[4739]: I0218 14:01:47.418970 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-policies\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418984 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a1340-9cce-4d5b-9cff-35d934fc4d71-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419001 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41d7405-9b25-414a-a247-1d945df68f89-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419016 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhsc6\" (UniqueName: \"kubernetes.io/projected/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-kube-api-access-xhsc6\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419037 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4lmp\" (UniqueName: \"kubernetes.io/projected/3440ceb6-cf9c-4732-bafb-8a58d419276a-kube-api-access-v4lmp\") 
pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419056 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-dir\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419069 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2c4j\" (UniqueName: \"kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419084 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419099 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-images\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419114 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6xj8\" (UniqueName: \"kubernetes.io/projected/537a1340-9cce-4d5b-9cff-35d934fc4d71-kube-api-access-m6xj8\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419133 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84627667-4128-47e5-a611-c650633e8362-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-audit-dir\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxl96\" (UniqueName: \"kubernetes.io/projected/7a738e9a-0692-4476-b9ba-930e3bdc34d2-kube-api-access-vxl96\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419179 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-etcd-client\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " 
pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419194 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-serving-cert\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419211 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419226 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhzb\" (UniqueName: \"kubernetes.io/projected/9d038913-f9eb-40ed-89a8-4687734573aa-kube-api-access-2zhzb\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419241 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84562f70-3466-4537-9761-33e3abcaacb9-proxy-tls\") pod 
\"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419270 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419288 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-config\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419302 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-default-certificate\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-auth-proxy-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 
14:01:47.419332 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqwt\" (UniqueName: \"kubernetes.io/projected/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-kube-api-access-nqqwt\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419349 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419363 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419378 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-trusted-ca\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948709-692e-4ce2-b84a-55a87412856d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" 
(UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419412 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-image-import-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419427 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bmps\" (UniqueName: \"kubernetes.io/projected/84627667-4128-47e5-a611-c650633e8362-kube-api-access-9bmps\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419457 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drssc\" (UniqueName: \"kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419474 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-serving-cert\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419499 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ds6x\" (UniqueName: \"kubernetes.io/projected/84562f70-3466-4537-9761-33e3abcaacb9-kube-api-access-5ds6x\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419519 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419553 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420051 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419555 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420224 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420246 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8cbc\" (UniqueName: \"kubernetes.io/projected/b4948709-692e-4ce2-b84a-55a87412856d-kube-api-access-r8cbc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420273 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-config\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420294 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84562f70-3466-4537-9761-33e3abcaacb9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-stats-auth\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420331 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-node-pullsecrets\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxgsm\" (UniqueName: \"kubernetes.io/projected/07036c39-40f5-4969-afd0-1003c1eae037-kube-api-access-sxgsm\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a1340-9cce-4d5b-9cff-35d934fc4d71-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: 
\"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420409 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-client\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcltt\" (UniqueName: \"kubernetes.io/projected/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-kube-api-access-mcltt\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420466 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07036c39-40f5-4969-afd0-1003c1eae037-serving-cert\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b4948709-692e-4ce2-b84a-55a87412856d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-service-ca-bundle\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420536 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d038913-f9eb-40ed-89a8-4687734573aa-machine-approver-tls\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420555 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smwfw\" (UniqueName: \"kubernetes.io/projected/d41d7405-9b25-414a-a247-1d945df68f89-kube-api-access-smwfw\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-encryption-config\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420585 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-srv-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzb79\" (UniqueName: 
\"kubernetes.io/projected/52fa7608-a369-4813-8a4d-3e2f8b84c885-kube-api-access-mzb79\") pod \"migrator-59844c95c7-4lvb5\" (UID: \"52fa7608-a369-4813-8a4d-3e2f8b84c885\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420679 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-metrics-certs\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.423778 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-node-pullsecrets\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.425084 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.428669 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.431162 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434472 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-audit\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434738 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434777 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434931 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-etcd-serving-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434984 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-audit-dir\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.435041 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-dir\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.436148 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.436291 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.436611 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.436726 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.437238 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.432112 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.432169 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 14:01:47 crc 
kubenswrapper[4739]: I0218 14:01:47.441529 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mxwhp"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.442154 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.442339 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.442975 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.443192 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.443263 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.442972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-image-import-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.443508 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.444969 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.445492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-config\") pod 
\"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.445647 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-images\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.446685 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-serving-cert\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.447004 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.448990 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.449159 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.449266 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 14:01:47 crc kubenswrapper[4739]: 
I0218 14:01:47.450540 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450684 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450763 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450793 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450887 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450997 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.451105 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.451340 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.452938 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.453094 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454215 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454289 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454345 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454378 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454491 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454615 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454654 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454721 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454806 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.455132 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.456718 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-etcd-client\") 
pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.457287 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.461155 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41d7405-9b25-414a-a247-1d945df68f89-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.479971 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-serving-cert\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.480394 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.483874 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.485235 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-encryption-config\") pod 
\"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.485400 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.485595 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.485753 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.485977 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.486542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-client\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.487154 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fbnbw"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.492783 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-serving-cert\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.495778 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.495879 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.496018 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.496317 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.496556 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.496876 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.498230 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-encryption-config\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.498751 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.500042 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.502608 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.504113 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.504266 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.505172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-policies\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.505536 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.506381 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.508125 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9zgsz"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.508236 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"] 
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.509167 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.513470 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.513665 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.513972 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.515263 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.516055 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fjgwd"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.516860 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.517468 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.518390 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rtb8n"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.519487 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.519612 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.521596 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07036c39-40f5-4969-afd0-1003c1eae037-serving-cert\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522378 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4948709-692e-4ce2-b84a-55a87412856d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522396 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-service-ca-bundle\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d038913-f9eb-40ed-89a8-4687734573aa-machine-approver-tls\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522480 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-srv-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzb79\" (UniqueName: \"kubernetes.io/projected/52fa7608-a369-4813-8a4d-3e2f8b84c885-kube-api-access-mzb79\") pod \"migrator-59844c95c7-4lvb5\" (UID: \"52fa7608-a369-4813-8a4d-3e2f8b84c885\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522518 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-metrics-certs\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84627667-4128-47e5-a611-c650633e8362-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522589 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a1340-9cce-4d5b-9cff-35d934fc4d71-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522604 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4lmp\" (UniqueName: 
\"kubernetes.io/projected/3440ceb6-cf9c-4732-bafb-8a58d419276a-kube-api-access-v4lmp\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522620 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhsc6\" (UniqueName: \"kubernetes.io/projected/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-kube-api-access-xhsc6\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522636 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2c4j\" (UniqueName: \"kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522653 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6xj8\" (UniqueName: \"kubernetes.io/projected/537a1340-9cce-4d5b-9cff-35d934fc4d71-kube-api-access-m6xj8\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84627667-4128-47e5-a611-c650633e8362-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhzb\" (UniqueName: \"kubernetes.io/projected/9d038913-f9eb-40ed-89a8-4687734573aa-kube-api-access-2zhzb\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522704 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-auth-proxy-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522720 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522734 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84562f70-3466-4537-9761-33e3abcaacb9-proxy-tls\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522748 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-config\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522777 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-default-certificate\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522793 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948709-692e-4ce2-b84a-55a87412856d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522808 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqwt\" (UniqueName: \"kubernetes.io/projected/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-kube-api-access-nqqwt\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc 
kubenswrapper[4739]: I0218 14:01:47.522824 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-trusted-ca\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522839 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bmps\" (UniqueName: \"kubernetes.io/projected/84627667-4128-47e5-a611-c650633e8362-kube-api-access-9bmps\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522860 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ds6x\" (UniqueName: \"kubernetes.io/projected/84562f70-3466-4537-9761-33e3abcaacb9-kube-api-access-5ds6x\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522874 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522907 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522921 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8cbc\" (UniqueName: \"kubernetes.io/projected/b4948709-692e-4ce2-b84a-55a87412856d-kube-api-access-r8cbc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522937 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84562f70-3466-4537-9761-33e3abcaacb9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522957 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxgsm\" (UniqueName: \"kubernetes.io/projected/07036c39-40f5-4969-afd0-1003c1eae037-kube-api-access-sxgsm\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-stats-auth\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " 
pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522988 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a1340-9cce-4d5b-9cff-35d934fc4d71-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.523072 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4948709-692e-4ce2-b84a-55a87412856d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.523121 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.523673 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a1340-9cce-4d5b-9cff-35d934fc4d71-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.523923 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-auth-proxy-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: 
I0218 14:01:47.524010 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.524927 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-config\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.525290 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.525717 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07036c39-40f5-4969-afd0-1003c1eae037-serving-cert\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.526250 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.526829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-trusted-ca\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.526974 
4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84562f70-3466-4537-9761-33e3abcaacb9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.527326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d038913-f9eb-40ed-89a8-4687734573aa-machine-approver-tls\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.527366 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.530016 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.530568 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948709-692e-4ce2-b84a-55a87412856d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.531313 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.532361 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a1340-9cce-4d5b-9cff-35d934fc4d71-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.532589 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.534169 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.535728 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.536883 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b2m46"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.537996 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.539432 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.541271 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.541885 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.543379 4739 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.545671 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.548827 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.549864 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q8t8f"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.551066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.551189 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8lgk6"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.552126 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.552833 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.553921 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.554993 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.556068 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.557289 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-67w4c"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.558331 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fbnbw"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.559424 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.560841 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fqdjl"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.562777 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zzrbt"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.562921 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.564800 4739 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.565885 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8lgk6"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.568966 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q8t8f"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.571949 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mxwhp"]
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.582261 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.602273 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.622253 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.642558 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.662353 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.682456 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.703957 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.722958 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.742478 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.763046 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.790494 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.799228 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84562f70-3466-4537-9761-33e3abcaacb9-proxy-tls\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.803437 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.822955 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.828475 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-default-certificate\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.843302 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.844184 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-service-ca-bundle\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.862889 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.869683 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-metrics-certs\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.882349 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.890503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-stats-auth\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.902499 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.923168 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.942379 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.962603 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.982031 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.003835 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.023654 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.043363 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.062238 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.069703 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84627667-4128-47e5-a611-c650633e8362-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.083215 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.103738 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.123846 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.143016 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.151804 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-srv-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.162887 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.182664 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.203417 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.223109 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.228702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.243041 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.263766 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.283409 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.287049 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84627667-4128-47e5-a611-c650633e8362-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.304553 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.322534 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.342853 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.363616 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.383683 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.388932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.416018 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.421647 4739 request.go:700] Waited for 1.014307486s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.423560 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.427838 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.443655 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.464417 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.482432 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.503908 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.523826 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 18 14:01:48 crc kubenswrapper[4739]: E0218 14:01:48.525785 4739 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 18 14:01:48 crc kubenswrapper[4739]: E0218 14:01:48.525839 4739 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 18 14:01:48 crc kubenswrapper[4739]: E0218 14:01:48.525895 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert podName:3440ceb6-cf9c-4732-bafb-8a58d419276a nodeName:}" failed. No retries permitted until 2026-02-18 14:01:49.02586516 +0000 UTC m=+141.521586152 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert") pod "service-ca-operator-777779d784-zwjnk" (UID: "3440ceb6-cf9c-4732-bafb-8a58d419276a") : failed to sync secret cache: timed out waiting for the condition
Feb 18 14:01:48 crc kubenswrapper[4739]: E0218 14:01:48.525921 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config podName:3440ceb6-cf9c-4732-bafb-8a58d419276a nodeName:}" failed. No retries permitted until 2026-02-18 14:01:49.025910251 +0000 UTC m=+141.521631193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config") pod "service-ca-operator-777779d784-zwjnk" (UID: "3440ceb6-cf9c-4732-bafb-8a58d419276a") : failed to sync configmap cache: timed out waiting for the condition
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.542372 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.563305 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.582998 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.603096 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.622277 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.643292 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.682723 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.703432 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.723594 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.742840 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.773415 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.782703 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.802399 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.822721 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.842482 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.863008 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.882415 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.903764 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.923461 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.943536 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.989699 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcltt\" (UniqueName: \"kubernetes.io/projected/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-kube-api-access-mcltt\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.013573 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bdfq\" (UniqueName: \"kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.026604 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srsh9\" (UniqueName: \"kubernetes.io/projected/86f15b94-810d-4448-a663-fd8862f0e601-kube-api-access-srsh9\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.040770 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.041117 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.041915 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.046746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.049424 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxl96\" (UniqueName: \"kubernetes.io/projected/7a738e9a-0692-4476-b9ba-930e3bdc34d2-kube-api-access-vxl96\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.064182 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.071980 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smwfw\" (UniqueName: \"kubernetes.io/projected/d41d7405-9b25-414a-a247-1d945df68f89-kube-api-access-smwfw\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.081545 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.083761 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.084898 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.103936 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.118114 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.123805 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.141649 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.142984 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.177469 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drssc\" (UniqueName: \"kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.181229 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.202851 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.225066 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.245357 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.265238 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.283231 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.302622 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.324626 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.368974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzb79\" (UniqueName: \"kubernetes.io/projected/52fa7608-a369-4813-8a4d-3e2f8b84c885-kube-api-access-mzb79\") pod \"migrator-59844c95c7-4lvb5\" (UID: \"52fa7608-a369-4813-8a4d-3e2f8b84c885\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.385815 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2c4j\" (UniqueName: \"kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.400546 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6xj8\" (UniqueName: \"kubernetes.io/projected/537a1340-9cce-4d5b-9cff-35d934fc4d71-kube-api-access-m6xj8\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.419250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4lmp\" (UniqueName: \"kubernetes.io/projected/3440ceb6-cf9c-4732-bafb-8a58d419276a-kube-api-access-v4lmp\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.439274 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zhzb\" (UniqueName: \"kubernetes.io/projected/9d038913-f9eb-40ed-89a8-4687734573aa-kube-api-access-2zhzb\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.442563 4739 request.go:700] Waited for 1.916968013s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/serviceaccounts/router/token
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.453127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.453589 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.457903 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhsc6\" (UniqueName: \"kubernetes.io/projected/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-kube-api-access-xhsc6\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.460670 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.465335 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.476771 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.479607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqwt\" (UniqueName: \"kubernetes.io/projected/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-kube-api-access-nqqwt\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.495772 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8cbc\" (UniqueName: \"kubernetes.io/projected/b4948709-692e-4ce2-b84a-55a87412856d-kube-api-access-r8cbc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.521615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bmps\" (UniqueName: \"kubernetes.io/projected/84627667-4128-47e5-a611-c650633e8362-kube-api-access-9bmps\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.541481 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ds6x\" (UniqueName: \"kubernetes.io/projected/84562f70-3466-4537-9761-33e3abcaacb9-kube-api-access-5ds6x\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.550281 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.562201 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxgsm\" (UniqueName: \"kubernetes.io/projected/07036c39-40f5-4969-afd0-1003c1eae037-kube-api-access-sxgsm\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.563466 4739 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 18 14:01:49 crc kubenswrapper[4739]: W0218 14:01:49.576760 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a73ee03_bb76_478c_bcd1_2d08f0e6f538.slice/crio-c9bb7b5da63b37ef6c871e86f33af4d9df9ded3b05196e2a8e89b2f887a04f2a WatchSource:0}: Error finding container c9bb7b5da63b37ef6c871e86f33af4d9df9ded3b05196e2a8e89b2f887a04f2a: Status 404 returned error can't find the container with id c9bb7b5da63b37ef6c871e86f33af4d9df9ded3b05196e2a8e89b2f887a04f2a
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.583429 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.603795 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.622725 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.622869 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.623389 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.638798 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.643207 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.654167 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.659090 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n78q8"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.668328 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.673207 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.685305 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5"]
Feb 18 14:01:49 crc kubenswrapper[4739]: W0218 14:01:49.698201 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc43a59b1_306c_4a0e_9f9f_fad2e9082d55.slice/crio-6ae935e4756c3ac9dd9d42b9a107606b44a96ac470faeaa29302b35c3bb1c8df WatchSource:0}: Error finding container 6ae935e4756c3ac9dd9d42b9a107606b44a96ac470faeaa29302b35c3bb1c8df: Status 404 returned error can't find the container with id 6ae935e4756c3ac9dd9d42b9a107606b44a96ac470faeaa29302b35c3bb1c8df
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.698996 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.705073 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"
Feb 18 14:01:49 crc kubenswrapper[4739]: W0218 14:01:49.712158 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52fa7608_a369_4813_8a4d_3e2f8b84c885.slice/crio-983d47fc6c49dd2c8fec728306c499f2e20948ad1e714f521cd59f425752df72 WatchSource:0}: Error finding container 983d47fc6c49dd2c8fec728306c499f2e20948ad1e714f521cd59f425752df72: Status 404 returned error can't find the container with id 983d47fc6c49dd2c8fec728306c499f2e20948ad1e714f521cd59f425752df72
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.714566 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.731934 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.738858 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:49 crc kubenswrapper[4739]: W0218 14:01:49.744467 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd88dbdf9_f0d5_44e2_91c8_6bcc8a6e3713.slice/crio-1542f2a32767ea611a0dd0201115ccf7f36e2a7c9f28dba16c4caf8e215a8b80 WatchSource:0}: Error finding container 1542f2a32767ea611a0dd0201115ccf7f36e2a7c9f28dba16c4caf8e215a8b80: Status 404 returned error can't find the container with id 1542f2a32767ea611a0dd0201115ccf7f36e2a7c9f28dba16c4caf8e215a8b80 Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751125 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751162 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751188 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxkr5\" (UniqueName: \"kubernetes.io/projected/9b2cc162-65ce-48dc-a49f-522d020772bd-kube-api-access-kxkr5\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751210 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751233 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8zc\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751299 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751321 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-images\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751342 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b2cc162-65ce-48dc-a49f-522d020772bd-proxy-tls\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751362 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751385 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751419 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-config\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751523 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751570 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ffd4b935-0435-4a73-a7cd-596856c63f84-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751593 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-service-ca\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751641 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751689 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751708 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkdk\" (UniqueName: \"kubernetes.io/projected/8d076be7-905d-48ba-a63c-1c87999890ba-kube-api-access-dmkdk\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751729 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbcls\" (UniqueName: \"kubernetes.io/projected/ffd4b935-0435-4a73-a7cd-596856c63f84-kube-api-access-hbcls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751750 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751787 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751810 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751831 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-ca\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751850 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751883 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751904 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751924 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vjp\" (UniqueName: \"kubernetes.io/projected/ed2152ce-68ce-43a9-87fc-b55b6f46e093-kube-api-access-g9vjp\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751943 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bcf6796a-5a97-465e-927e-eaf313fcec05-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751966 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752004 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752040 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752071 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-serving-cert\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752094 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpnt\" (UniqueName: \"kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752138 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-service-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752161 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q72nm\" (UniqueName: \"kubernetes.io/projected/fb09df70-be06-48b6-a41d-16fb110b7c55-kube-api-access-q72nm\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752198 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-config\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752219 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752240 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb09df70-be06-48b6-a41d-16fb110b7c55-serving-cert\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752263 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752293 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752362 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-srv-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752381 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-config\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752415 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752436 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752474 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: 
\"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752496 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq96w\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-kube-api-access-vq96w\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752521 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stfg\" (UniqueName: \"kubernetes.io/projected/db4aad67-0ef8-474a-9e92-143738aed5b6-kube-api-access-6stfg\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752543 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc 
kubenswrapper[4739]: I0218 14:01:49.752601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752638 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752687 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b95rs\" (UniqueName: \"kubernetes.io/projected/c8e8ae74-3ef7-42df-99f2-1f67c11edf6d-kube-api-access-b95rs\") pod \"downloads-7954f5f757-rtb8n\" (UID: \"c8e8ae74-3ef7-42df-99f2-1f67c11edf6d\") " pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752711 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ed2152ce-68ce-43a9-87fc-b55b6f46e093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752735 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752769 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-client\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752799 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4zb\" (UniqueName: \"kubernetes.io/projected/bcf6796a-5a97-465e-927e-eaf313fcec05-kube-api-access-tf4zb\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752817 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-config\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752833 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752852 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq67j\" (UniqueName: \"kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: E0218 14:01:49.755899 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.255883167 +0000 UTC m=+142.751604089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.756053 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"] Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.794658 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.822311 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"] Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.824076 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853311 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853638 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-certs\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853676 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853723 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq96w\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-kube-api-access-vq96w\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853744 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6stfg\" (UniqueName: \"kubernetes.io/projected/db4aad67-0ef8-474a-9e92-143738aed5b6-kube-api-access-6stfg\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853766 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853786 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-csi-data-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " 
pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853809 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853831 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853854 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853873 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d27c3dde-4f78-49ec-8cc2-39c588d91f56-tmpfs\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgvzk\" (UniqueName: 
\"kubernetes.io/projected/d27c3dde-4f78-49ec-8cc2-39c588d91f56-kube-api-access-mgvzk\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853950 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5873f31d-7486-489d-866f-9442195a86bf-metrics-tls\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853972 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-mountpoint-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854019 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854039 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b95rs\" (UniqueName: \"kubernetes.io/projected/c8e8ae74-3ef7-42df-99f2-1f67c11edf6d-kube-api-access-b95rs\") pod \"downloads-7954f5f757-rtb8n\" (UID: \"c8e8ae74-3ef7-42df-99f2-1f67c11edf6d\") " pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854106 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed2152ce-68ce-43a9-87fc-b55b6f46e093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-config\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-client\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854185 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf4zb\" (UniqueName: \"kubernetes.io/projected/bcf6796a-5a97-465e-927e-eaf313fcec05-kube-api-access-tf4zb\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854207 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-plugins-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz4jv\" (UniqueName: \"kubernetes.io/projected/45eb000e-b333-47b8-9cb5-d383ca0628dd-kube-api-access-dz4jv\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854300 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq67j\" (UniqueName: \"kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854325 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854345 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxkr5\" (UniqueName: \"kubernetes.io/projected/9b2cc162-65ce-48dc-a49f-522d020772bd-kube-api-access-kxkr5\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr8zc\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854466 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-metrics-tls\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854489 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdqxz\" (UniqueName: \"kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854510 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-webhook-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854558 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854580 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-images\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b2cc162-65ce-48dc-a49f-522d020772bd-proxy-tls\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854667 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854689 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-bound-sa-token\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854712 
4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854736 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-node-bootstrap-token\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854761 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-config\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854789 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854813 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: 
\"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854869 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb6d5402-0976-4291-b4ee-5c481fd8df72-cert\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854895 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854916 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-socket-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854939 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-cabundle\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854960 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e774d72-bc18-4fab-b988-c36f581d7560-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854984 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k7pd\" (UniqueName: \"kubernetes.io/projected/bb6d5402-0976-4291-b4ee-5c481fd8df72-kube-api-access-8k7pd\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.855023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ffd4b935-0435-4a73-a7cd-596856c63f84-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.855048 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.855072 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-service-ca\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.855097 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.855117 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.856060 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-config\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.856106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.856782 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-config\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.857144 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.857805 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.857895 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-images\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858089 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858119 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-service-ca\") pod 
\"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858355 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858361 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: E0218 14:01:49.858417 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.35839707 +0000 UTC m=+142.854117992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5873f31d-7486-489d-866f-9442195a86bf-config-volume\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858708 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbcls\" (UniqueName: \"kubernetes.io/projected/ffd4b935-0435-4a73-a7cd-596856c63f84-kube-api-access-hbcls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858865 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmkdk\" (UniqueName: \"kubernetes.io/projected/8d076be7-905d-48ba-a63c-1c87999890ba-kube-api-access-dmkdk\") pod 
\"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858998 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.859093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e774d72-bc18-4fab-b988-c36f581d7560-trusted-ca\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.859180 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgllc\" (UniqueName: \"kubernetes.io/projected/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-kube-api-access-mgllc\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.859247 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.859276 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.859298 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-ca\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860297 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vjp\" (UniqueName: \"kubernetes.io/projected/ed2152ce-68ce-43a9-87fc-b55b6f46e093-kube-api-access-g9vjp\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 
18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860319 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bcf6796a-5a97-465e-927e-eaf313fcec05-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860342 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860343 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860372 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860404 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860496 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5455\" (UniqueName: \"kubernetes.io/projected/5873f31d-7486-489d-866f-9442195a86bf-kube-api-access-l5455\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-registration-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860568 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 
crc kubenswrapper[4739]: I0218 14:01:49.860615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860635 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-key\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860675 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860709 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx6n9\" (UniqueName: \"kubernetes.io/projected/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-kube-api-access-jx6n9\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860728 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvpnt\" (UniqueName: 
\"kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860749 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-serving-cert\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860779 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rkm\" (UniqueName: \"kubernetes.io/projected/db115d76-8ccf-4c6b-8b1f-f507ad381c95-kube-api-access-f4rkm\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860818 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q72nm\" (UniqueName: \"kubernetes.io/projected/fb09df70-be06-48b6-a41d-16fb110b7c55-kube-api-access-q72nm\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860830 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-client\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860834 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed2152ce-68ce-43a9-87fc-b55b6f46e093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860431 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861286 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861599 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc 
kubenswrapper[4739]: I0218 14:01:49.861637 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-service-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861669 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trndz\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-kube-api-access-trndz\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861688 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4sq7\" (UniqueName: \"kubernetes.io/projected/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-kube-api-access-k4sq7\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861835 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-config\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861894 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861912 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862012 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb09df70-be06-48b6-a41d-16fb110b7c55-serving-cert\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862032 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862061 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862078 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862239 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-service-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862558 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-srv-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862651 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-config\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862800 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-config\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.863048 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.863541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.864154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ffd4b935-0435-4a73-a7cd-596856c63f84-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.864591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b2cc162-65ce-48dc-a49f-522d020772bd-proxy-tls\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.864960 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-serving-cert\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.865249 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.865326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.865656 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-srv-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.866530 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb09df70-be06-48b6-a41d-16fb110b7c55-serving-cert\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.866887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bcf6796a-5a97-465e-927e-eaf313fcec05-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.867017 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.867513 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.894790 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-config\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.894966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.895493 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.895844 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.898588 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc 
kubenswrapper[4739]: I0218 14:01:49.901240 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.901310 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.901707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.901772 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.902083 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-ca\") pod 
\"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.902494 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.902567 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.903131 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.903226 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.901660 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.919139 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.919159 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.925697 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq67j\" (UniqueName: \"kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.930144 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxkr5\" (UniqueName: \"kubernetes.io/projected/9b2cc162-65ce-48dc-a49f-522d020772bd-kube-api-access-kxkr5\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.940704 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b95rs\" (UniqueName: \"kubernetes.io/projected/c8e8ae74-3ef7-42df-99f2-1f67c11edf6d-kube-api-access-b95rs\") pod \"downloads-7954f5f757-rtb8n\" (UID: \"c8e8ae74-3ef7-42df-99f2-1f67c11edf6d\") " pod="openshift-console/downloads-7954f5f757-rtb8n"
Feb 18 14:01:49 crc kubenswrapper[4739]: W0218 14:01:49.959370 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6cef9b9_56ee_4d0a_8c13_651e3f649a0e.slice/crio-93cf01cf23eb4d9e3b80af8345d0f7d0393165cae409b67d5aab1659268d8033 WatchSource:0}: Error finding container 93cf01cf23eb4d9e3b80af8345d0f7d0393165cae409b67d5aab1659268d8033: Status 404 returned error can't find the container with id 93cf01cf23eb4d9e3b80af8345d0f7d0393165cae409b67d5aab1659268d8033
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963831 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-metrics-tls\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963875 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-webhook-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963898 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdqxz\" (UniqueName: \"kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963936 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-bound-sa-token\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963960 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-node-bootstrap-token\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb6d5402-0976-4291-b4ee-5c481fd8df72-cert\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964007 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-socket-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964029 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-cabundle\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964051 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e774d72-bc18-4fab-b988-c36f581d7560-metrics-tls\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964074 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k7pd\" (UniqueName: \"kubernetes.io/projected/bb6d5402-0976-4291-b4ee-5c481fd8df72-kube-api-access-8k7pd\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964097 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5873f31d-7486-489d-866f-9442195a86bf-config-volume\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964141 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e774d72-bc18-4fab-b988-c36f581d7560-trusted-ca\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964160 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgllc\" (UniqueName: \"kubernetes.io/projected/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-kube-api-access-mgllc\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964182 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964252 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5455\" (UniqueName: \"kubernetes.io/projected/5873f31d-7486-489d-866f-9442195a86bf-kube-api-access-l5455\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964274 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-registration-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964298 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964322 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-key\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964353 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx6n9\" (UniqueName: \"kubernetes.io/projected/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-kube-api-access-jx6n9\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964384 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4rkm\" (UniqueName: \"kubernetes.io/projected/db115d76-8ccf-4c6b-8b1f-f507ad381c95-kube-api-access-f4rkm\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964416 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trndz\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-kube-api-access-trndz\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964436 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4sq7\" (UniqueName: \"kubernetes.io/projected/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-kube-api-access-k4sq7\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964477 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964534 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-certs\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964562 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-csi-data-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964583 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d27c3dde-4f78-49ec-8cc2-39c588d91f56-tmpfs\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgvzk\" (UniqueName: \"kubernetes.io/projected/d27c3dde-4f78-49ec-8cc2-39c588d91f56-kube-api-access-mgvzk\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964649 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-mountpoint-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5873f31d-7486-489d-866f-9442195a86bf-metrics-tls\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964698 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964737 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-plugins-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964758 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz4jv\" (UniqueName: \"kubernetes.io/projected/45eb000e-b333-47b8-9cb5-d383ca0628dd-kube-api-access-dz4jv\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.965683 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-socket-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.965750 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-mountpoint-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: E0218 14:01:49.965983 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.465968614 +0000 UTC m=+142.961689536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.966083 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d27c3dde-4f78-49ec-8cc2-39c588d91f56-tmpfs\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.966279 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-plugins-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.966757 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-registration-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.967408 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.967490 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-csi-data-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.968278 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5873f31d-7486-489d-866f-9442195a86bf-config-volume\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.968360 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e774d72-bc18-4fab-b988-c36f581d7560-trusted-ca\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964020 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr8zc\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.969019 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-cabundle\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.971140 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e774d72-bc18-4fab-b988-c36f581d7560-metrics-tls\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.972518 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb6d5402-0976-4291-b4ee-5c481fd8df72-cert\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.973685 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-key\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.974777 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.974959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.975751 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-webhook-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.976088 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-certs\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.977274 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5873f31d-7486-489d-866f-9442195a86bf-metrics-tls\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.977378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf4zb\" (UniqueName: \"kubernetes.io/projected/bcf6796a-5a97-465e-927e-eaf313fcec05-kube-api-access-tf4zb\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.977839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-metrics-tls\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.977891 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.980258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-node-bootstrap-token\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.999703 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5"
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.012219 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6"
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.021545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq96w\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-kube-api-access-vq96w\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5"
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.043881 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg"
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.045075 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt"
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.064437 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6stfg\" (UniqueName: \"kubernetes.io/projected/db4aad67-0ef8-474a-9e92-143738aed5b6-kube-api-access-6stfg\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.066474 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.066564 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.566542417 +0000 UTC m=+143.062263339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.067027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.068280 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.568270062 +0000 UTC m=+143.063990984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.079186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmkdk\" (UniqueName: \"kubernetes.io/projected/8d076be7-905d-48ba-a63c-1c87999890ba-kube-api-access-dmkdk\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46"
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.095242 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.100028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.130060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" event={"ID":"86f15b94-810d-4448-a663-fd8862f0e601","Type":"ContainerStarted","Data":"6a94ba1746bb9046411621744c6bc575cf53f8390a14bb5b831460a72bde647b"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.132575 4739 generic.go:334] "Generic (PLEG): container finished" podID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerID="00784d510eb0a7114170d2f3527c5738b72eabb6feec6367c4900c0af18aeb52" exitCode=0
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.132749 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" event={"ID":"6a73ee03-bb76-478c-bcd1-2d08f0e6f538","Type":"ContainerDied","Data":"00784d510eb0a7114170d2f3527c5738b72eabb6feec6367c4900c0af18aeb52"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.132785 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" event={"ID":"6a73ee03-bb76-478c-bcd1-2d08f0e6f538","Type":"ContainerStarted","Data":"c9bb7b5da63b37ef6c871e86f33af4d9df9ded3b05196e2a8e89b2f887a04f2a"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.137013 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" event={"ID":"3440ceb6-cf9c-4732-bafb-8a58d419276a","Type":"ContainerStarted","Data":"9c4a15b6d2187e9d750901a73c94c3cb04a444f5d12747c31f942fb283c997a5"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.139956 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" event={"ID":"c43a59b1-306c-4a0e-9f9f-fad2e9082d55","Type":"ContainerStarted","Data":"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.139983 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" event={"ID":"c43a59b1-306c-4a0e-9f9f-fad2e9082d55","Type":"ContainerStarted","Data":"6ae935e4756c3ac9dd9d42b9a107606b44a96ac470faeaa29302b35c3bb1c8df"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.143789 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" event={"ID":"d41d7405-9b25-414a-a247-1d945df68f89","Type":"ContainerStarted","Data":"f11807ff00d70727eedd73b8cfc97f26df2ef13d4d075612357c262e9f7e3a7b"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.143818 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" event={"ID":"d41d7405-9b25-414a-a247-1d945df68f89","Type":"ContainerStarted","Data":"7f5f1179086b0a7de906fc48820274d8fe29f9e5fa08346a3858a1510789c397"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.144544 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5cdhr" event={"ID":"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e","Type":"ContainerStarted","Data":"93cf01cf23eb4d9e3b80af8345d0f7d0393165cae409b67d5aab1659268d8033"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.145413 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" event={"ID":"52fa7608-a369-4813-8a4d-3e2f8b84c885","Type":"ContainerStarted","Data":"88f333bff0ef6dbf7f88e6a1ea8d79ef8fdf9114af426c2cffe30e5eddc12780"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.145432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" event={"ID":"52fa7608-a369-4813-8a4d-3e2f8b84c885","Type":"ContainerStarted","Data":"983d47fc6c49dd2c8fec728306c499f2e20948ad1e714f521cd59f425752df72"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.146230 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" event={"ID":"537a1340-9cce-4d5b-9cff-35d934fc4d71","Type":"ContainerStarted","Data":"c2572353e5e0c823eeff3b1e32bc342cd7bbc8ae2f7590fb64fad5e83246bea1"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.147364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" event={"ID":"9d038913-f9eb-40ed-89a8-4687734573aa","Type":"ContainerStarted","Data":"e5bd52a0075af14b489c4570d608174633afa7b7a881b0dd3fcc09f4d546742f"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.147380 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" event={"ID":"9d038913-f9eb-40ed-89a8-4687734573aa","Type":"ContainerStarted","Data":"a8bc596c47e78bec4371bb8a6e511c0017cd3d84224a0fed49e43a4fd604f54f"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.148434 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" event={"ID":"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39","Type":"ContainerStarted","Data":"8fe561d69997a42f05c72d8193b431b41c69814dd140f03816516811cdf03267"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.148463 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" event={"ID":"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39","Type":"ContainerStarted","Data":"fae6dc1b6a99284726a5c316e9b142133b64b76e06f03661a6baf4b3e9620752"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.149112 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.150227 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" event={"ID":"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713","Type":"ContainerStarted","Data":"1542f2a32767ea611a0dd0201115ccf7f36e2a7c9f28dba16c4caf8e215a8b80"}
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.167709 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.168039 4739 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hkhdz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.168066 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.170124 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.670094288 +0000 UTC m=+143.165815220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.173656 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vjp\" (UniqueName: \"kubernetes.io/projected/ed2152ce-68ce-43a9-87fc-b55b6f46e093-kube-api-access-g9vjp\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.186196 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.188225 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" event={"ID":"7a738e9a-0692-4476-b9ba-930e3bdc34d2","Type":"ContainerStarted","Data":"9798d65b85ae4e13b6c002346401f6bb1ef68d24b1a06667dcac8951b39cc2a0"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.193914 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvpnt\" (UniqueName: \"kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt\") pod \"console-f9d7485db-r2dqq\" (UID: 
\"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.195963 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.209122 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.217421 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.220598 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q72nm\" (UniqueName: \"kubernetes.io/projected/fb09df70-be06-48b6-a41d-16fb110b7c55-kube-api-access-q72nm\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.223212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbcls\" (UniqueName: \"kubernetes.io/projected/ffd4b935-0435-4a73-a7cd-596856c63f84-kube-api-access-hbcls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.226592 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.230877 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.246162 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.259846 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.268195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz4jv\" (UniqueName: \"kubernetes.io/projected/45eb000e-b333-47b8-9cb5-d383ca0628dd-kube-api-access-dz4jv\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.273863 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.276375 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.276523 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.277293 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.77726593 +0000 UTC m=+143.272986852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.280340 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx6n9\" (UniqueName: \"kubernetes.io/projected/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-kube-api-access-jx6n9\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.287191 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.292910 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.306328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdqxz\" (UniqueName: \"kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.324288 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.324684 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-bound-sa-token\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.339779 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4sq7\" (UniqueName: \"kubernetes.io/projected/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-kube-api-access-k4sq7\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.366953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4rkm\" (UniqueName: 
\"kubernetes.io/projected/db115d76-8ccf-4c6b-8b1f-f507ad381c95-kube-api-access-f4rkm\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.369135 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.377665 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.378090 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.87805973 +0000 UTC m=+143.373780652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.379329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trndz\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-kube-api-access-trndz\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.399344 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.404682 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgvzk\" (UniqueName: \"kubernetes.io/projected/d27c3dde-4f78-49ec-8cc2-39c588d91f56-kube-api-access-mgvzk\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.409114 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.415271 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.416911 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5455\" (UniqueName: \"kubernetes.io/projected/5873f31d-7486-489d-866f-9442195a86bf-kube-api-access-l5455\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.423134 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.430587 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.446173 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgllc\" (UniqueName: \"kubernetes.io/projected/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-kube-api-access-mgllc\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.463420 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k7pd\" (UniqueName: \"kubernetes.io/projected/bb6d5402-0976-4291-b4ee-5c481fd8df72-kube-api-access-8k7pd\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.481366 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.481948 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.981936918 +0000 UTC m=+143.477657840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.482713 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.489768 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.533726 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.541064 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zzrbt"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.566696 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fqdjl"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.583512 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.586737 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.086554785 +0000 UTC m=+143.582275717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.586919 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.587303 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.087296134 +0000 UTC m=+143.583017056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.589772 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.634824 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.634864 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.638297 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.683567 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.688501 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.688772 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.18872059 +0000 UTC m=+143.684441512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.690173 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.691749 4739 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.191733057 +0000 UTC m=+143.687453979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.739326 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.747648 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.791327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.791714 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.291696225 +0000 UTC m=+143.787417157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.892347 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.892631 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.392620158 +0000 UTC m=+143.888341080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.993729 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.994017 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.493996931 +0000 UTC m=+143.989717853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.995412 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.995845 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.495831548 +0000 UTC m=+143.991552460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.096560 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.096915 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.596901414 +0000 UTC m=+144.092622336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.200226 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.200807 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.700787333 +0000 UTC m=+144.196508305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.223894 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" event={"ID":"3440ceb6-cf9c-4732-bafb-8a58d419276a","Type":"ContainerStarted","Data":"709aa7985ff2276c020f17fb6d2e08776b5e1af00e2c9079bc5748e13ce979f3"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.230709 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" event={"ID":"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713","Type":"ContainerStarted","Data":"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.237400 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.236088 4739 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lbspb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.237503 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.242365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" event={"ID":"537a1340-9cce-4d5b-9cff-35d934fc4d71","Type":"ContainerStarted","Data":"c580c90df571a7fb6bc9806588bd15723f39f8d7a44c9d061e736010db9ea57e"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.250239 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" event={"ID":"bcf6796a-5a97-465e-927e-eaf313fcec05","Type":"ContainerStarted","Data":"76ca13646a6f555b7a58af73d7be351c098394951382d84b9d889dde606395a3"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.251115 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.281944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fjgwd" event={"ID":"d004f5dd-a97b-4707-be47-cd5a9bb69c8a","Type":"ContainerStarted","Data":"97859bcc46030d2f87ef58bc80363d7933bc87323cf44110994346065252628b"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.289081 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9zgsz"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.300371 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" event={"ID":"6a73ee03-bb76-478c-bcd1-2d08f0e6f538","Type":"ContainerStarted","Data":"9544046d49726b08bf59463c644ffe22c27473e133ce5760004a0699f322d56b"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.300843 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.302315 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.310409 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.802465165 +0000 UTC m=+144.298186087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.310635 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.310931 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.810919972 +0000 UTC m=+144.306640894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.324094 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" event={"ID":"9d038913-f9eb-40ed-89a8-4687734573aa","Type":"ContainerStarted","Data":"32ef41f0b4a7925cabbc8250513df4d06f7d5d181f6b27d2803e7483e7b4cf75"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.336471 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" event={"ID":"07036c39-40f5-4969-afd0-1003c1eae037","Type":"ContainerStarted","Data":"2e24119667eedf40b82477d0bd3173e3790841c18a675752032ca58080019729"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.336509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" event={"ID":"07036c39-40f5-4969-afd0-1003c1eae037","Type":"ContainerStarted","Data":"0d679bf97ccb59f87700e07f0a788d9cfe9d2202bf473ab59b61116ca4b4adee"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.337218 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fqdjl"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.343859 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" event={"ID":"52fa7608-a369-4813-8a4d-3e2f8b84c885","Type":"ContainerStarted","Data":"5ede99a099f422ae08e5df96fa4980d3f1ba68a9678cd69c1a2957615f10e256"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.350691 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" event={"ID":"84562f70-3466-4537-9761-33e3abcaacb9","Type":"ContainerStarted","Data":"847dde122b625d1f909bbda96fe9090a22f609392abfc78c26ae880a24885532"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.350739 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" event={"ID":"84562f70-3466-4537-9761-33e3abcaacb9","Type":"ContainerStarted","Data":"ddb6ee868e6b584bf7a8a889579a1f94d4dcfeb1dbd2bba5c51811607eab1333"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.350806 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.350873 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.352301 4739 generic.go:334] "Generic (PLEG): container finished" podID="7a738e9a-0692-4476-b9ba-930e3bdc34d2" containerID="603b086f5a20b396ca79d4fcf433b144e7214077cbc50414486f96674e7ab8c4" exitCode=0
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.352356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" event={"ID":"7a738e9a-0692-4476-b9ba-930e3bdc34d2","Type":"ContainerDied","Data":"603b086f5a20b396ca79d4fcf433b144e7214077cbc50414486f96674e7ab8c4"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.365960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" event={"ID":"9b2cc162-65ce-48dc-a49f-522d020772bd","Type":"ContainerStarted","Data":"489dd3026a2ab974817b6a4a3b7a46f35344594129a95c2983c87659ebaee3df"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.367027 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" event={"ID":"9c1d88a8-7aa9-413f-81cc-5a4852b2691b","Type":"ContainerStarted","Data":"15d73e6bd39405a7a3ff8fe8df861177449cae8d826eea2924592223c7683055"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.369183 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.373098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" event={"ID":"d41d7405-9b25-414a-a247-1d945df68f89","Type":"ContainerStarted","Data":"c2b9ad86542f62b7253ed535eed8d5364f60faa03da19a9f47f405687aeda261"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.375876 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" event={"ID":"84627667-4128-47e5-a611-c650633e8362","Type":"ContainerStarted","Data":"9790ec144703857b9df6a328709790c3dbab5582dc3a53c477c5e2e2ad431e6c"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.375925 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" event={"ID":"84627667-4128-47e5-a611-c650633e8362","Type":"ContainerStarted","Data":"7237d1072de04816ebd6193a3f49122a2c11b6abbee558bca1fe65ebd887a9f3"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.390457 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" event={"ID":"db4aad67-0ef8-474a-9e92-143738aed5b6","Type":"ContainerStarted","Data":"c4ae30c0d54d4ef219b473e3da57997fe4557e4d0c833df91259e05007b1b050"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.390505 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.390852 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.390916 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.391287 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.391324 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.403261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" event={"ID":"b4948709-692e-4ce2-b84a-55a87412856d","Type":"ContainerStarted","Data":"acbe2563d63342e05403b9dd1af03a77b36b37ebdca0810dd6223ac25e4c6b37"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.408888 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5cdhr" event={"ID":"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e","Type":"ContainerStarted","Data":"3a9511a2775b08e37ccce91ae91ba1e1e8cf796f076f0c19d9ce73a8baf793c5"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.411685 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.412741 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.912727007 +0000 UTC m=+144.408447919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.418856 4739 generic.go:334] "Generic (PLEG): container finished" podID="86f15b94-810d-4448-a663-fd8862f0e601" containerID="0293fd784194161be55c4a69f6d5bfe73ee070c34ef9ca3ab5c650f69fc6e283" exitCode=0
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.419102 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" event={"ID":"86f15b94-810d-4448-a663-fd8862f0e601","Type":"ContainerDied","Data":"0293fd784194161be55c4a69f6d5bfe73ee070c34ef9ca3ab5c650f69fc6e283"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.421866 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.423061 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-c4w7p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.423106 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.482041 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rtb8n"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.486622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.516264 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.522675 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.022661651 +0000 UTC m=+144.518382653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.551977 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8"]
Feb 18 14:01:51 crc kubenswrapper[4739]: W0218 14:01:51.602381 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcd69695_49d3_46a8_9981_b592c44e827e.slice/crio-521d0f76ee7d4a163d13b57cff922dcd0df4129aae7138664aa07df19279036a WatchSource:0}: Error finding container 521d0f76ee7d4a163d13b57cff922dcd0df4129aae7138664aa07df19279036a: Status 404 returned error can't find the container with id 521d0f76ee7d4a163d13b57cff922dcd0df4129aae7138664aa07df19279036a
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.618627 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.621294 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.121189842 +0000 UTC m=+144.616910764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.639384 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.649189 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.650204 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-67w4c"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.661537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.669339 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.671671 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.678669 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b2m46"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.694326 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.712201 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.719604 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 14:01:51 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Feb 18 14:01:51 crc kubenswrapper[4739]: [+]process-running ok
Feb 18 14:01:51 crc kubenswrapper[4739]: healthz check failed
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.719658 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.720027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.720304 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.220292878 +0000 UTC m=+144.716013800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: W0218 14:01:51.744651 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45eb000e_b333_47b8_9cb5_d383ca0628dd.slice/crio-3d4418f13c27b323d612b53c204b3546e3d725840a94d14a749b5016f9be2e3a WatchSource:0}: Error finding container 3d4418f13c27b323d612b53c204b3546e3d725840a94d14a749b5016f9be2e3a: Status 404 returned error can't find the container with id 3d4418f13c27b323d612b53c204b3546e3d725840a94d14a749b5016f9be2e3a
Feb 18 14:01:51 crc kubenswrapper[4739]: W0218 14:01:51.755620 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d076be7_905d_48ba_a63c_1c87999890ba.slice/crio-09468a62e8e5dca94fb7ce0971d5ace725777d3b55d8d158d439a61da1d529bd WatchSource:0}: Error finding container 09468a62e8e5dca94fb7ce0971d5ace725777d3b55d8d158d439a61da1d529bd: Status 404 returned error can't find the container with id 09468a62e8e5dca94fb7ce0971d5ace725777d3b55d8d158d439a61da1d529bd
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.824858 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.825484 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.32546872 +0000 UTC m=+144.821189642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.847500 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.850500 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.850541 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q8t8f"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.861973 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8lgk6"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.898661 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fbnbw"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.900566 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.902834 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mxwhp"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.902879 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx"]
Feb 18 14:01:51 crc kubenswrapper[4739]: W0218 14:01:51.907840 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd27c3dde_4f78_49ec_8cc2_39c588d91f56.slice/crio-9bdd0417bd4499953f72c03f906c291fb87de4ec1d2e25e679b2a7a1fe3920c5 WatchSource:0}: Error finding container 9bdd0417bd4499953f72c03f906c291fb87de4ec1d2e25e679b2a7a1fe3920c5: Status 404 returned error can't find the container with id 9bdd0417bd4499953f72c03f906c291fb87de4ec1d2e25e679b2a7a1fe3920c5
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.926453 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.926789 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.426777692 +0000 UTC m=+144.922498614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: W0218 14:01:51.941148 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5873f31d_7486_489d_866f_9442195a86bf.slice/crio-b7cb747ed0dbd51bf35e523b522fd40d2795e9cd42d0a8a569fa478847385e17 WatchSource:0}: Error finding container b7cb747ed0dbd51bf35e523b522fd40d2795e9cd42d0a8a569fa478847385e17: Status 404 returned error can't find the container with id b7cb747ed0dbd51bf35e523b522fd40d2795e9cd42d0a8a569fa478847385e17
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.953825 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" podStartSLOduration=118.953807806 podStartE2EDuration="1m58.953807806s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:51.953783956 +0000 UTC m=+144.449504878" watchObservedRunningTime="2026-02-18 14:01:51.953807806 +0000 UTC m=+144.449528728"
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.004561 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5cdhr" podStartSLOduration=118.00454415 podStartE2EDuration="1m58.00454415s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.003350689 +0000 UTC m=+144.499071611" watchObservedRunningTime="2026-02-18 14:01:52.00454415 +0000 UTC m=+144.500265072"
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.028307 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.028516 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.528490435 +0000 UTC m=+145.024211357 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.028548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.029416 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.529403258 +0000 UTC m=+145.025124180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.039398 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" podStartSLOduration=118.039377744 podStartE2EDuration="1m58.039377744s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.037082306 +0000 UTC m=+144.532803238" watchObservedRunningTime="2026-02-18 14:01:52.039377744 +0000 UTC m=+144.535098656" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.085985 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podStartSLOduration=118.085966041 podStartE2EDuration="1m58.085966041s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.078379386 +0000 UTC m=+144.574100308" watchObservedRunningTime="2026-02-18 14:01:52.085966041 +0000 UTC m=+144.581686963" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.129281 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.130018 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.629996402 +0000 UTC m=+145.125717324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.204670 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" podStartSLOduration=118.20465647 podStartE2EDuration="1m58.20465647s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.161898462 +0000 UTC m=+144.657619384" watchObservedRunningTime="2026-02-18 14:01:52.20465647 +0000 UTC m=+144.700377392" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.231450 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" podStartSLOduration=118.231424868 podStartE2EDuration="1m58.231424868s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 14:01:52.205708247 +0000 UTC m=+144.701429169" watchObservedRunningTime="2026-02-18 14:01:52.231424868 +0000 UTC m=+144.727145790" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.231583 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.231949 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podStartSLOduration=118.231945661 podStartE2EDuration="1m58.231945661s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.230835573 +0000 UTC m=+144.726556495" watchObservedRunningTime="2026-02-18 14:01:52.231945661 +0000 UTC m=+144.727666583" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.231984 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.731968882 +0000 UTC m=+145.227689804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.315521 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" podStartSLOduration=118.315501808 podStartE2EDuration="1m58.315501808s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.315039646 +0000 UTC m=+144.810760578" watchObservedRunningTime="2026-02-18 14:01:52.315501808 +0000 UTC m=+144.811222730" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.316549 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" podStartSLOduration=118.316539824 podStartE2EDuration="1m58.316539824s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.279421941 +0000 UTC m=+144.775142873" watchObservedRunningTime="2026-02-18 14:01:52.316539824 +0000 UTC m=+144.812260766" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.332189 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.336397 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.832425212 +0000 UTC m=+145.328146144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.336649 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.337070 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.837054091 +0000 UTC m=+145.332775013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.397037 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podStartSLOduration=118.397021392 podStartE2EDuration="1m58.397021392s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.352351824 +0000 UTC m=+144.848072766" watchObservedRunningTime="2026-02-18 14:01:52.397021392 +0000 UTC m=+144.892742314" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.431924 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" event={"ID":"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da","Type":"ContainerStarted","Data":"a9dc6a5a76b00c50706e5e8be6140d9f1bbc9b5ab63de7523b7e90764fa60739"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.433270 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r2dqq" event={"ID":"dcd69695-49d3-46a8-9981-b592c44e827e","Type":"ContainerStarted","Data":"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.433294 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r2dqq" 
event={"ID":"dcd69695-49d3-46a8-9981-b592c44e827e","Type":"ContainerStarted","Data":"521d0f76ee7d4a163d13b57cff922dcd0df4129aae7138664aa07df19279036a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.436142 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" event={"ID":"db115d76-8ccf-4c6b-8b1f-f507ad381c95","Type":"ContainerStarted","Data":"780fce724889e3cb5d1d13acc16e16877db685558fd22ae9b142dc12aea4188c"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.436762 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podStartSLOduration=118.436751802 podStartE2EDuration="1m58.436751802s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.435210013 +0000 UTC m=+144.930930935" watchObservedRunningTime="2026-02-18 14:01:52.436751802 +0000 UTC m=+144.932472724" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.438078 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.438395 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.938384664 +0000 UTC m=+145.434105586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.443583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rtb8n" event={"ID":"c8e8ae74-3ef7-42df-99f2-1f67c11edf6d","Type":"ContainerStarted","Data":"76a79069d52c8f8cf823038205c3af57a6bc33e4cafbfb519dad10e4bb7c590b"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.443621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rtb8n" event={"ID":"c8e8ae74-3ef7-42df-99f2-1f67c11edf6d","Type":"ContainerStarted","Data":"bc0a14a3686e361498dc238b3050070dd4c8dcf0b3d9dd6f2ff6ffcab89ad1ac"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.444461 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.445772 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" event={"ID":"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0","Type":"ContainerStarted","Data":"24416838c3485f5f59f847cbabc4eb0faac583f47943bdc172447667af33c1a4"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.448016 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" event={"ID":"7a738e9a-0692-4476-b9ba-930e3bdc34d2","Type":"ContainerStarted","Data":"72bd5bb0249ac0bfae0cd92c5ca1379c10de09155ffd4a1cd5651649a5f7f819"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 
14:01:52.451016 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fbnbw" event={"ID":"bb6d5402-0976-4291-b4ee-5c481fd8df72","Type":"ContainerStarted","Data":"6da9def75b21ba4d5b8be0d36f038b06d324a82056ff5ab96ebeafb36d90715b"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.455378 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" event={"ID":"1af3a272-dd2c-446d-9ac3-7a2c380c34c8","Type":"ContainerStarted","Data":"4b6b8cd6c72ad920711875a76ba3d43d536145610437b3064ac664b8e7e6e7a9"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.455388 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.455408 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" event={"ID":"1af3a272-dd2c-446d-9ac3-7a2c380c34c8","Type":"ContainerStarted","Data":"f0a115ccfb7a2db55613a41b98d76463498f836890263e847766929f500d65b4"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.455431 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.459145 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" 
event={"ID":"db4aad67-0ef8-474a-9e92-143738aed5b6","Type":"ContainerStarted","Data":"71cd9ce0ab26ac5d77f5f24bda6ba500e6e908373465984fe7265b695d172478"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.459861 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.459903 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.460548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8lgk6" event={"ID":"5873f31d-7486-489d-866f-9442195a86bf","Type":"ContainerStarted","Data":"b7cb747ed0dbd51bf35e523b522fd40d2795e9cd42d0a8a569fa478847385e17"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.464140 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" event={"ID":"84562f70-3466-4537-9761-33e3abcaacb9","Type":"ContainerStarted","Data":"6f339289c07bbc22f01501f11fa2db49998435cdbbd47b4616fcca0ec4213610"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.467020 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" event={"ID":"ffd4b935-0435-4a73-a7cd-596856c63f84","Type":"ContainerStarted","Data":"b918f952fcd24aa6e35d78a9dae641db97a6f91032b8ac283e78c8b1d09bb523"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.467049 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" event={"ID":"ffd4b935-0435-4a73-a7cd-596856c63f84","Type":"ContainerStarted","Data":"43c19ceb1da81f8e21da41ae36950d98049c8c27b28b1c652b8c5936c46fcc24"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.468239 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" event={"ID":"4e774d72-bc18-4fab-b988-c36f581d7560","Type":"ContainerStarted","Data":"b8e43f9cb25a4bf00bf6a5f4f07101efbaa32a2c7f2003d1beda2735f37d5e23"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.470683 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" event={"ID":"663bc659-8603-490f-9b6e-7ffe14960463","Type":"ContainerStarted","Data":"2091e0b6ec823c2be46cc955f8e1860f25dcbaf76d40f0a02489ec9b087df706"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.470711 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" event={"ID":"663bc659-8603-490f-9b6e-7ffe14960463","Type":"ContainerStarted","Data":"39ed9908fc06adc6beaf03f5a0f7a7f9cb74f347fecb397c807b3e8019f3cdd9"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.471020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.473555 4739 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-64j2j container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.473588 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.474183 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" podStartSLOduration=118.474158183 podStartE2EDuration="1m58.474158183s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.469886553 +0000 UTC m=+144.965607475" watchObservedRunningTime="2026-02-18 14:01:52.474158183 +0000 UTC m=+144.969879115" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.474593 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" event={"ID":"45eb000e-b333-47b8-9cb5-d383ca0628dd","Type":"ContainerStarted","Data":"dac7852d0d18f8e9f0f185b76bd1542a5beda377626a6f61607a0797c0fdf1d4"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.474633 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" event={"ID":"45eb000e-b333-47b8-9cb5-d383ca0628dd","Type":"ContainerStarted","Data":"3d4418f13c27b323d612b53c204b3546e3d725840a94d14a749b5016f9be2e3a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.480460 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" event={"ID":"8d076be7-905d-48ba-a63c-1c87999890ba","Type":"ContainerStarted","Data":"09468a62e8e5dca94fb7ce0971d5ace725777d3b55d8d158d439a61da1d529bd"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.482940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" event={"ID":"ed2152ce-68ce-43a9-87fc-b55b6f46e093","Type":"ContainerStarted","Data":"ebd678427637d2d33b7e2608fe1da8a385d7e4a9549ca54971083d9ff0db99f6"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.483924 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" event={"ID":"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be","Type":"ContainerStarted","Data":"e25f2428cec7a470befa13bd19f47f8ffcb8c05d35ac6f33704688d06921be9a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.490487 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" event={"ID":"d27c3dde-4f78-49ec-8cc2-39c588d91f56","Type":"ContainerStarted","Data":"9bdd0417bd4499953f72c03f906c291fb87de4ec1d2e25e679b2a7a1fe3920c5"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.491520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" event={"ID":"fb09df70-be06-48b6-a41d-16fb110b7c55","Type":"ContainerStarted","Data":"f4b0d8e8e140fb6de11974026f9767ddfdf44ffbc0d5f61b072eb7c7dcd22916"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.491545 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" event={"ID":"fb09df70-be06-48b6-a41d-16fb110b7c55","Type":"ContainerStarted","Data":"2ab479b392cde15a02159889aef023d8858fc9ecbff4659d1f2680779fd37752"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.493230 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" event={"ID":"9b2cc162-65ce-48dc-a49f-522d020772bd","Type":"ContainerStarted","Data":"4787e51956ce23d3ac2da1265db039a01e4c24c45a372c3dfbceb153d7f0cb94"} Feb 18 14:01:52 
crc kubenswrapper[4739]: I0218 14:01:52.493252 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" event={"ID":"9b2cc162-65ce-48dc-a49f-522d020772bd","Type":"ContainerStarted","Data":"bc0ca75a411d408a5eef0a9be021e6ec6ddfe044ce205158e181cec58a1cb55a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.513532 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" event={"ID":"99012b96-1a3e-48ae-ac97-55ab91c6eb6f","Type":"ContainerStarted","Data":"71b6e14b271f5585f0284dd41e5c3e8015f2d67a74c41b5d18b9e94e373567cf"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.516537 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" event={"ID":"86f15b94-810d-4448-a663-fd8862f0e601","Type":"ContainerStarted","Data":"521a422bc1cfb9e5f3bf56987c01fdfaac33848c737e116e528a48af944e975a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.518318 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" event={"ID":"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2","Type":"ContainerStarted","Data":"43a442238ea052e2c884c898a38ebff90ea3de18bedf387b1ec96c36fc6e942a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.518337 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" event={"ID":"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2","Type":"ContainerStarted","Data":"21dd952a4adff37ba4d7fe8578ee2fe8fc346eac47f87176e72cafe003f400d5"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.518360 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" podStartSLOduration=118.51835053799999 podStartE2EDuration="1m58.518350538s" 
podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.516681416 +0000 UTC m=+145.012402338" watchObservedRunningTime="2026-02-18 14:01:52.518350538 +0000 UTC m=+145.014071450" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.533921 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" event={"ID":"9c1d88a8-7aa9-413f-81cc-5a4852b2691b","Type":"ContainerStarted","Data":"b9cc6ff5892682dda1a1d2876a6134b5a1006ba7276bacb929c849056d68e891"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.535296 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.535347 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.537930 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" event={"ID":"b4948709-692e-4ce2-b84a-55a87412856d","Type":"ContainerStarted","Data":"a98b964a7a70ab8a57d716a80b47087dc112e5b88b69c67664d9c32ea469d7fe"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.549304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" 
event={"ID":"bcf6796a-5a97-465e-927e-eaf313fcec05","Type":"ContainerStarted","Data":"2c4a7eb068be71106726e9e23aea90de36bfb4b6a0e0bded3667395897654c3d"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.549339 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" event={"ID":"bcf6796a-5a97-465e-927e-eaf313fcec05","Type":"ContainerStarted","Data":"28c99292842f2224376e77794a5bb086114fc269ef0fe2189cb969191993b23d"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.550847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fjgwd" event={"ID":"d004f5dd-a97b-4707-be47-cd5a9bb69c8a","Type":"ContainerStarted","Data":"f8ecfd184fdb8ef0b9ae52832ef376e99e1faa089e999d350bcddbc1f4b5ee38"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.551287 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.551543 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.051531761 +0000 UTC m=+145.547252683 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.562661 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.580248 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.562341 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" event={"ID":"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac","Type":"ContainerStarted","Data":"e7ea5d3f2ab9c1840a87fa286e1d46b0d0c23b0e0bfb037d47385cbe78e55901"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.582489 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.582602 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:52 crc 
kubenswrapper[4739]: I0218 14:01:52.591229 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" podStartSLOduration=118.5912119 podStartE2EDuration="1m58.5912119s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.590111252 +0000 UTC m=+145.085832184" watchObservedRunningTime="2026-02-18 14:01:52.5912119 +0000 UTC m=+145.086932822" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.591330 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" podStartSLOduration=118.591324073 podStartE2EDuration="1m58.591324073s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.558609893 +0000 UTC m=+145.054330815" watchObservedRunningTime="2026-02-18 14:01:52.591324073 +0000 UTC m=+145.087044995" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.634788 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" podStartSLOduration=119.634773969 podStartE2EDuration="1m59.634773969s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.634348358 +0000 UTC m=+145.130069290" watchObservedRunningTime="2026-02-18 14:01:52.634773969 +0000 UTC m=+145.130494881" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.666912 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.667101 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.167073729 +0000 UTC m=+145.662794651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.667601 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.670779 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.170768164 +0000 UTC m=+145.666489086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.676525 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" podStartSLOduration=118.676510341 podStartE2EDuration="1m58.676510341s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.67608348 +0000 UTC m=+145.171804422" watchObservedRunningTime="2026-02-18 14:01:52.676510341 +0000 UTC m=+145.172231263" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.711643 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:52 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:52 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:52 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.711719 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.733140 4739 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" podStartSLOduration=118.733126396 podStartE2EDuration="1m58.733126396s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.731409831 +0000 UTC m=+145.227130763" watchObservedRunningTime="2026-02-18 14:01:52.733126396 +0000 UTC m=+145.228847318" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.773049 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.773471 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.273433061 +0000 UTC m=+145.769153993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.815378 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" podStartSLOduration=118.815357878 podStartE2EDuration="1m58.815357878s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.809711783 +0000 UTC m=+145.305432715" watchObservedRunningTime="2026-02-18 14:01:52.815357878 +0000 UTC m=+145.311078800" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.867005 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" podStartSLOduration=119.866990424 podStartE2EDuration="1m59.866990424s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.865594728 +0000 UTC m=+145.361315660" watchObservedRunningTime="2026-02-18 14:01:52.866990424 +0000 UTC m=+145.362711346" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.881676 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" 
(UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.881961 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.381949929 +0000 UTC m=+145.877670851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.894864 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" podStartSLOduration=119.89484787 podStartE2EDuration="1m59.89484787s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.894555132 +0000 UTC m=+145.390276074" watchObservedRunningTime="2026-02-18 14:01:52.89484787 +0000 UTC m=+145.390568792" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.925881 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-rtb8n" podStartSLOduration=118.925861666 podStartE2EDuration="1m58.925861666s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
14:01:52.923764713 +0000 UTC m=+145.419485655" watchObservedRunningTime="2026-02-18 14:01:52.925861666 +0000 UTC m=+145.421582588" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.957105 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fjgwd" podStartSLOduration=5.957085469 podStartE2EDuration="5.957085469s" podCreationTimestamp="2026-02-18 14:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.956244997 +0000 UTC m=+145.451965919" watchObservedRunningTime="2026-02-18 14:01:52.957085469 +0000 UTC m=+145.452806391" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.970664 4739 csr.go:261] certificate signing request csr-l66mr is approved, waiting to be issued Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.982102 4739 csr.go:257] certificate signing request csr-l66mr is issued Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.983231 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.983401 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.483383034 +0000 UTC m=+145.979103956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.983517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.984786 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.48477247 +0000 UTC m=+145.980493392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:52.998148 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-r2dqq" podStartSLOduration=118.998132523 podStartE2EDuration="1m58.998132523s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.997411444 +0000 UTC m=+145.493132376" watchObservedRunningTime="2026-02-18 14:01:52.998132523 +0000 UTC m=+145.493853445" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.034920 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" podStartSLOduration=119.034906168 podStartE2EDuration="1m59.034906168s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.032758072 +0000 UTC m=+145.528478994" watchObservedRunningTime="2026-02-18 14:01:53.034906168 +0000 UTC m=+145.530627090" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.084784 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.084981 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.584943543 +0000 UTC m=+146.080664465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.085248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.085622 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.58560813 +0000 UTC m=+146.081329042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.119101 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" podStartSLOduration=119.11908445 podStartE2EDuration="1m59.11908445s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.1167482 +0000 UTC m=+145.612469122" watchObservedRunningTime="2026-02-18 14:01:53.11908445 +0000 UTC m=+145.614805372" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.186048 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.186546 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.686532063 +0000 UTC m=+146.182252975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.288362 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.288663 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.788652856 +0000 UTC m=+146.284373778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.390011 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.390292 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.890276996 +0000 UTC m=+146.385997918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.491149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.491535 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.991515427 +0000 UTC m=+146.487236349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.603348 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.603561 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.103532054 +0000 UTC m=+146.599252976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.603610 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.603948 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.103937265 +0000 UTC m=+146.599658187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.630523 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" event={"ID":"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be","Type":"ContainerStarted","Data":"236074c57c1102d1c9abb448d5248efa5d814c7eb4ba1abf09e30f256385de74"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.656037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" event={"ID":"8d076be7-905d-48ba-a63c-1c87999890ba","Type":"ContainerStarted","Data":"5f7d692089627c673ae15d03817b92bbfac4bd1b0f33613f15e2635a67ce9b44"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.658993 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" podStartSLOduration=119.658979699 podStartE2EDuration="1m59.658979699s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.65591672 +0000 UTC m=+146.151637642" watchObservedRunningTime="2026-02-18 14:01:53.658979699 +0000 UTC m=+146.154700621" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.661618 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" 
event={"ID":"ed2152ce-68ce-43a9-87fc-b55b6f46e093","Type":"ContainerStarted","Data":"085475ec07d7fa9f0df964f771f0a29197ed45b320566c3fca64c98a15993e48"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.676414 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" podStartSLOduration=119.676401036 podStartE2EDuration="1m59.676401036s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.675864512 +0000 UTC m=+146.171585434" watchObservedRunningTime="2026-02-18 14:01:53.676401036 +0000 UTC m=+146.172121958" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.684417 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" event={"ID":"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac","Type":"ContainerStarted","Data":"436c1454dffbbf07d77daecfc79adac7f01e41c2c69d09ea3732be1117989b0f"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.684522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" event={"ID":"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac","Type":"ContainerStarted","Data":"b581b1967dfe3011fb0142ca970c8aa6f293934d819ad7ae90bd7f0f329f20ba"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.684579 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.701921 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" 
event={"ID":"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da","Type":"ContainerStarted","Data":"9bcdae9d8da576faeba2dbeedcd179768a4de4037989feaa9468135c4583c084"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.705052 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.705202 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.205180746 +0000 UTC m=+146.700901668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.705364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.706496 4739 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.206480479 +0000 UTC m=+146.702201471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.714898 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podStartSLOduration=119.714876205 podStartE2EDuration="1m59.714876205s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.713302504 +0000 UTC m=+146.209023436" watchObservedRunningTime="2026-02-18 14:01:53.714876205 +0000 UTC m=+146.210597127" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.717755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" event={"ID":"99012b96-1a3e-48ae-ac97-55ab91c6eb6f","Type":"ContainerStarted","Data":"89e3a99af5e463d1e2ed508dea4255c9770adaf66fe00de05793d97f6d850de9"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.724609 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:53 crc kubenswrapper[4739]: 
[-]has-synced failed: reason withheld Feb 18 14:01:53 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:53 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.724660 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.748654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" event={"ID":"86f15b94-810d-4448-a663-fd8862f0e601","Type":"ContainerStarted","Data":"e27f64741c6acd6ffe70a9a8036fbba883fba5c130a13afc1987261c072ab5e3"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.775715 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" event={"ID":"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0","Type":"ContainerStarted","Data":"74c7bbe24b159d4bcf411cc4b8b9d30acdb5e3c7b45e81fb2a3d542d4b3390c4"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.788902 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fbnbw" event={"ID":"bb6d5402-0976-4291-b4ee-5c481fd8df72","Type":"ContainerStarted","Data":"11971e4803fd6e1fb0cc9035d716f350af33d4fe02a82656ae665b1515d55e92"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.802402 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" podStartSLOduration=119.802386923 podStartE2EDuration="1m59.802386923s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.748604081 +0000 UTC 
m=+146.244324993" watchObservedRunningTime="2026-02-18 14:01:53.802386923 +0000 UTC m=+146.298107845" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.802567 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" podStartSLOduration=120.802561997 podStartE2EDuration="2m0.802561997s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.793854543 +0000 UTC m=+146.289575465" watchObservedRunningTime="2026-02-18 14:01:53.802561997 +0000 UTC m=+146.298282919" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.808159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" event={"ID":"d27c3dde-4f78-49ec-8cc2-39c588d91f56","Type":"ContainerStarted","Data":"d22e2a825118fd5fe2867dcdb8fdfcade6e169eb808d0666acc156a1903a123a"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.808916 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.809697 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.809728 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Feb 18 
14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.809966 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.810934 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.310918392 +0000 UTC m=+146.806639314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.828504 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8lgk6" event={"ID":"5873f31d-7486-489d-866f-9442195a86bf","Type":"ContainerStarted","Data":"2edc3da1889dd5aef1e8d5662a5a7ae98ca072a3efde529c3e5df626c076a934"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.828828 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" podStartSLOduration=113.828808371 podStartE2EDuration="1m53.828808371s" podCreationTimestamp="2026-02-18 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 14:01:53.827955619 +0000 UTC m=+146.323676551" watchObservedRunningTime="2026-02-18 14:01:53.828808371 +0000 UTC m=+146.324529313" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.840901 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" event={"ID":"4e774d72-bc18-4fab-b988-c36f581d7560","Type":"ContainerStarted","Data":"e588d0a06b60f78a085a5f6c34deecdfe8576f05850c14066206904d239c0286"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.840953 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" event={"ID":"4e774d72-bc18-4fab-b988-c36f581d7560","Type":"ContainerStarted","Data":"e49bf89bbe45503ba9159ab5b3ca7e0e669fcd46c09ef23518e59e78c735c005"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.844771 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.844816 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.844986 4739 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-64j2j container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.845027 4739 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.860740 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.877724 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fbnbw" podStartSLOduration=6.877624345 podStartE2EDuration="6.877624345s" podCreationTimestamp="2026-02-18 14:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.862991809 +0000 UTC m=+146.358712731" watchObservedRunningTime="2026-02-18 14:01:53.877624345 +0000 UTC m=+146.373345267" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.887908 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.912688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.919854 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 14:01:54.419830249 +0000 UTC m=+146.915551171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.947296 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podStartSLOduration=119.947277644 podStartE2EDuration="1m59.947277644s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.946915585 +0000 UTC m=+146.442636507" watchObservedRunningTime="2026-02-18 14:01:53.947277644 +0000 UTC m=+146.442998576" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.983592 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 13:56:52 +0000 UTC, rotation deadline is 2027-01-09 11:23:36.29556401 +0000 UTC Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.983629 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7797h21m42.311937732s for next certificate rotation Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.999336 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" podStartSLOduration=119.999304931 podStartE2EDuration="1m59.999304931s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.994785825 +0000 UTC m=+146.490506767" watchObservedRunningTime="2026-02-18 14:01:53.999304931 +0000 UTC m=+146.495025853" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.016136 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.016586 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.516572105 +0000 UTC m=+147.012293017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.064007 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" podStartSLOduration=120.063988713 podStartE2EDuration="2m0.063988713s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:54.062536065 +0000 UTC m=+146.558256997" watchObservedRunningTime="2026-02-18 14:01:54.063988713 +0000 UTC m=+146.559709635" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.081742 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.082062 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.085523 4739 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-44mk7 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.085566 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" 
podUID="7a738e9a-0692-4476-b9ba-930e3bdc34d2" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.086542 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.087018 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.090204 4739 patch_prober.go:28] interesting pod/apiserver-76f77b778f-n78q8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.090265 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" podUID="86f15b94-810d-4448-a663-fd8862f0e601" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.118033 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.118387 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 14:01:54.61837138 +0000 UTC m=+147.114092302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.139242 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" podStartSLOduration=120.139225205 podStartE2EDuration="2m0.139225205s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:54.08998399 +0000 UTC m=+146.585704922" watchObservedRunningTime="2026-02-18 14:01:54.139225205 +0000 UTC m=+146.634946127" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.219425 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.219547 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.719526138 +0000 UTC m=+147.215247060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.219689 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.219988 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.71998118 +0000 UTC m=+147.215702102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.320825 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.320929 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.820909892 +0000 UTC m=+147.316630814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.321125 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.321427 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.821418785 +0000 UTC m=+147.317139707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.421796 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.422003 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.921987589 +0000 UTC m=+147.417708511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.422258 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.422756 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.922746138 +0000 UTC m=+147.418467130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.523102 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.523305 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.02327378 +0000 UTC m=+147.518994722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.523452 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.523753 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.023743672 +0000 UTC m=+147.519464664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.625084 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.625259 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.125226929 +0000 UTC m=+147.620947851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.625503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.625801 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.125789703 +0000 UTC m=+147.621510625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.702887 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:54 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:54 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:54 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.702945 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.727109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.727251 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 14:01:55.227230349 +0000 UTC m=+147.722951271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.727369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.727660 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.22765122 +0000 UTC m=+147.723372142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.828599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.828730 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.328704846 +0000 UTC m=+147.824425768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.828859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.829196 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.329188058 +0000 UTC m=+147.824908980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.851021 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8lgk6" event={"ID":"5873f31d-7486-489d-866f-9442195a86bf","Type":"ContainerStarted","Data":"e4408e01ea6d3ed572094cc716c58a6a0cc397dc7f4837e9f8f0dbaa68c4831b"} Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.851101 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.851528 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.851568 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.852306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" 
event={"ID":"99012b96-1a3e-48ae-ac97-55ab91c6eb6f","Type":"ContainerStarted","Data":"53a1b946ba4020ebc2169fb6292f920459c8cfb91c458a68a9eab9872915bb7a"} Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.853332 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" event={"ID":"db115d76-8ccf-4c6b-8b1f-f507ad381c95","Type":"ContainerStarted","Data":"4e9aec60e2ded8d58f4b7f571605f78d408914430f9de6c52b1fdf3d3d4230e2"} Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.854792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" event={"ID":"ed2152ce-68ce-43a9-87fc-b55b6f46e093","Type":"ContainerStarted","Data":"eb4792e25d8fa18f949a84af22e25b6fe8c8cef0f70ec20c26397ec7c08480fa"} Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.855309 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.855343 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.855683 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.855722 4739 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.880948 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8lgk6" podStartSLOduration=7.8809309469999995 podStartE2EDuration="7.880930947s" podCreationTimestamp="2026-02-18 14:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:54.88027907 +0000 UTC m=+147.376000002" watchObservedRunningTime="2026-02-18 14:01:54.880930947 +0000 UTC m=+147.376651869" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.930198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.930353 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.430329076 +0000 UTC m=+147.926049998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.930515 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.933893 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.433876027 +0000 UTC m=+147.929597029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.951643 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" podStartSLOduration=120.951622543 podStartE2EDuration="2m0.951622543s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:54.926832846 +0000 UTC m=+147.422553778" watchObservedRunningTime="2026-02-18 14:01:54.951622543 +0000 UTC m=+147.447343465" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.987021 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" podStartSLOduration=120.986988272 podStartE2EDuration="2m0.986988272s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:54.986124619 +0000 UTC m=+147.481845541" watchObservedRunningTime="2026-02-18 14:01:54.986988272 +0000 UTC m=+147.482709184" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.031914 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.032054 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.532026789 +0000 UTC m=+148.027747711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.032163 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.032461 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.532436609 +0000 UTC m=+148.028157531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.132912 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.133483 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.633467784 +0000 UTC m=+148.129188706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.173221 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.234925 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.235998 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.735987418 +0000 UTC m=+148.231708340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.273669 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.274556 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.280021 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.334809 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.336263 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.336510 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 
18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.336547 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78txq\" (UniqueName: \"kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.336594 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.336738 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.836724195 +0000 UTC m=+148.332445117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.437955 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.438059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.438117 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.438158 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78txq\" (UniqueName: \"kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq\") pod \"certified-operators-2ch5b\" (UID: 
\"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.439018 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.439283 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.93927163 +0000 UTC m=+148.434992552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.439517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.476954 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78txq\" (UniqueName: \"kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq\") pod \"certified-operators-2ch5b\" (UID: 
\"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.483436 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.484541 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.489710 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.500116 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.539715 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.539907 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4gr\" (UniqueName: \"kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.539957 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " 
pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.540046 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.540143 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.040129951 +0000 UTC m=+148.535850873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.599124 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.641668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.641811 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.641848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.641876 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4gr\" (UniqueName: \"kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.642584 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities\") pod \"community-operators-47vjm\" 
(UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.642863 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.142850989 +0000 UTC m=+148.638571911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.643251 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.684122 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.685323 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.691797 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4gr\" (UniqueName: \"kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.711369 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:55 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:55 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:55 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.711410 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.720639 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.742522 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.742719 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.742741 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r242g\" (UniqueName: \"kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.742782 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.742896 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.242881609 +0000 UTC m=+148.738602531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.805826 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.845120 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.845165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r242g\" (UniqueName: \"kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.845209 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.845228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.846064 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.846275 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.846369 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.346360167 +0000 UTC m=+148.842081089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.909379 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.910262 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.921994 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" event={"ID":"db115d76-8ccf-4c6b-8b1f-f507ad381c95","Type":"ContainerStarted","Data":"ec8c434a941a3aead3ccdc2c7c54080621be7500a89ecbfc3709582eb8f12b43"} Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.945954 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.946099 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.446076428 +0000 UTC m=+148.941797350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.946495 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wccvz\" (UniqueName: \"kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.946549 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.946635 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.946678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.946977 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.446961571 +0000 UTC m=+148.942682493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.976652 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.009375 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r242g\" (UniqueName: \"kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.021901 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.047884 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.048104 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wccvz\" (UniqueName: \"kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.048207 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.048298 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.049112 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 14:01:56.549096495 +0000 UTC m=+149.044817417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.050887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.053535 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.137332 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wccvz\" (UniqueName: \"kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.152251 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.152663 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.652649035 +0000 UTC m=+149.148369957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.230055 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.255014 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.255307 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.255378 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.755349043 +0000 UTC m=+149.251069965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.255541 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.257524 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.262538 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.307038 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:56 crc kubenswrapper[4739]: 
I0218 14:01:56.356989 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.357074 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.357104 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.362358 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.862340202 +0000 UTC m=+149.358061124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.370256 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.371565 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.373425 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.439707 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.460084 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.460516 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.460982 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.468900 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.968873498 +0000 UTC m=+149.464594420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.562382 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.563084 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.063072458 +0000 UTC m=+149.558793380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.664113 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.665325 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.165307644 +0000 UTC m=+149.661028566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.707941 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:56 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:56 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:56 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.707990 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.723840 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.766477 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.766764 4739 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.26675255 +0000 UTC m=+149.762473472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.872128 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.872491 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.372476286 +0000 UTC m=+149.868197198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.898134 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"] Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.946592 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerStarted","Data":"1bb8b1ac920da0708b75374c6eb6ccb11af1b832abba028a06c828609d37f144"} Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.947647 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerStarted","Data":"e5127c0ff7f429af7d0aca6c5c08ea2c05b6bea576e6c38224ce6837bef827fc"} Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.949100 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" event={"ID":"db115d76-8ccf-4c6b-8b1f-f507ad381c95","Type":"ContainerStarted","Data":"5a62d87be5fc2476bd7663a4bf5cea4de5e2b35ec2e2fe46d8b36981ea800819"} Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.951084 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerStarted","Data":"59dbe1e3611ef825eb60e8c102d83aabfcf6d0ed72189d4427096a9698a93bb3"} Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.976092 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.976463 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.476428316 +0000 UTC m=+149.972149238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.051388 4739 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.077030 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.077252 4739 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.577230506 +0000 UTC m=+150.072951438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.077532 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.077864 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.577855612 +0000 UTC m=+150.073576544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.101500 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:01:57 crc kubenswrapper[4739]: W0218 14:01:57.130475 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28c2c6dd_c0bb_4e02_8ec9_53b9616e1bf2.slice/crio-81b5fcbef1870c44069bb7dc9291550938515d7a028de25c6b79896e1bc2cecd WatchSource:0}: Error finding container 81b5fcbef1870c44069bb7dc9291550938515d7a028de25c6b79896e1bc2cecd: Status 404 returned error can't find the container with id 81b5fcbef1870c44069bb7dc9291550938515d7a028de25c6b79896e1bc2cecd Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.179040 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.179346 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.679329538 +0000 UTC m=+150.175050460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.280807 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.281187 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.781171714 +0000 UTC m=+150.276892636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.381518 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.381727 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.881699417 +0000 UTC m=+150.377420339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.382106 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.382539 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.882519698 +0000 UTC m=+150.378240700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.483496 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.483688 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.983653056 +0000 UTC m=+150.479374018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.483960 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.484292 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.984277342 +0000 UTC m=+150.479998324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.584583 4739 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T14:01:57.051418113Z","Handler":null,"Name":""} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.584947 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.585335 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:58.085319737 +0000 UTC m=+150.581040659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.588058 4739 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.588103 4739 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.672011 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.672980 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.674843 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.685488 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.686034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.704032 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:57 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:57 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:57 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.704384 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.774065 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.774109 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.787093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.787253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.787372 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbcms\" (UniqueName: \"kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.814725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.818293 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.889109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.889413 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.889470 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbcms\" (UniqueName: \"kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.889532 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities\") pod \"redhat-marketplace-wznkg\" 
(UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.890092 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.890642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.910969 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.915367 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbcms\" (UniqueName: \"kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.965122 4739 generic.go:334] "Generic (PLEG): container finished" podID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerID="0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5" exitCode=0 Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.965270 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerDied","Data":"0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.965320 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerStarted","Data":"81b5fcbef1870c44069bb7dc9291550938515d7a028de25c6b79896e1bc2cecd"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.967050 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.968004 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8da6c0deaae0a27d49185a7f50e5f502f2ddf6d0698cd86cad40a5e6540e0378"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.968041 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9b0c3fa0f0da5808ea89cc75a0163b69cb17d1b688e3974e71e9747e5134f851"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.968253 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.970972 4739 generic.go:334] "Generic (PLEG): container finished" podID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerID="551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e" exitCode=0 Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.971022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerDied","Data":"551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.979953 4739 generic.go:334] "Generic (PLEG): container finished" podID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerID="22ab4c4400803a84698f429676267f73d2f72204f8bfd5e8b8c44045eb32a01a" exitCode=0 Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.980095 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerDied","Data":"22ab4c4400803a84698f429676267f73d2f72204f8bfd5e8b8c44045eb32a01a"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.989698 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.993910 4739 generic.go:334] "Generic (PLEG): container finished" podID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerID="4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608" exitCode=0 Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.993989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerDied","Data":"4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.001572 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9a1aa940676c1fe86ed10576072f18f096d597d3a0f3ef9cf86f4973b5b08f8f"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.001604 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cea8c372be0df8247e972ed465c79036115cba0a5d76071a77b952c15a262844"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.003020 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"755128bbca4d4db9e21eeaf033ab801a571edabac2fd8b18b9aba579152986dd"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.003080 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e46cb7a7c1d8bb18f756f48deec52b33ec495dfac9df36980e5e58aa5f7d6301"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.027562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" event={"ID":"db115d76-8ccf-4c6b-8b1f-f507ad381c95","Type":"ContainerStarted","Data":"96ae3fcb8a5bdbeca9bba7e6dc545b0aae0b9cd422530b28a73190bdfb3ff8b1"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.074171 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.081775 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.093215 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.109821 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.197932 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v8dg\" (UniqueName: \"kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.199021 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " 
pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.199073 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.200008 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" podStartSLOduration=11.199989996 podStartE2EDuration="11.199989996s" podCreationTimestamp="2026-02-18 14:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:58.175818935 +0000 UTC m=+150.671539857" watchObservedRunningTime="2026-02-18 14:01:58.199989996 +0000 UTC m=+150.695710918" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.294182 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:01:58 crc kubenswrapper[4739]: W0218 14:01:58.298561 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6955631f_9981_47a5_8ecb_8756df4e0256.slice/crio-10d8a724d59bd6a5d14617a528e748b2601030ae0dc43e290bc4b95d4dedba40 WatchSource:0}: Error finding container 10d8a724d59bd6a5d14617a528e748b2601030ae0dc43e290bc4b95d4dedba40: Status 404 returned error can't find the container with id 10d8a724d59bd6a5d14617a528e748b2601030ae0dc43e290bc4b95d4dedba40 Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.300032 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities\") pod 
\"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.300196 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v8dg\" (UniqueName: \"kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.301065 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.303501 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.303555 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.318059 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v8dg\" (UniqueName: \"kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg\") pod \"redhat-marketplace-fst2x\" (UID: 
\"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.404523 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.427677 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.473463 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.474680 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.476602 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.477112 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.505228 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwf78\" (UniqueName: \"kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.505278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content\") pod \"redhat-operators-fm56z\" (UID: 
\"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.505349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.607907 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwf78\" (UniqueName: \"kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.607954 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.608020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.608648 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " 
pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.608870 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.627405 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwf78\" (UniqueName: \"kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.662956 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.671232 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"] Feb 18 14:01:58 crc kubenswrapper[4739]: W0218 14:01:58.672319 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef7eb68_c7a7_448e_bbbc_10798fabc4e6.slice/crio-96ae9a700ac6737e5625e17caed3c6cbabf21ead3f7cc350e69ee97905a208a7 WatchSource:0}: Error finding container 96ae9a700ac6737e5625e17caed3c6cbabf21ead3f7cc350e69ee97905a208a7: Status 404 returned error can't find the container with id 96ae9a700ac6737e5625e17caed3c6cbabf21ead3f7cc350e69ee97905a208a7 Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.672454 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.680494 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.703841 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:58 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:58 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:58 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.704087 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.708882 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt9pm\" (UniqueName: \"kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.708985 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.709007 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.810352 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.810399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.810431 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt9pm\" (UniqueName: \"kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.811006 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.811157 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.833744 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt9pm\" (UniqueName: \"kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.857039 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.979274 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.980142 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.982995 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.985749 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.990600 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.013648 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.013699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.035548 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.043489 4739 generic.go:334] "Generic (PLEG): container finished" podID="f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" containerID="74c7bbe24b159d4bcf411cc4b8b9d30acdb5e3c7b45e81fb2a3d542d4b3390c4" exitCode=0 Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.043576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" event={"ID":"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0","Type":"ContainerDied","Data":"74c7bbe24b159d4bcf411cc4b8b9d30acdb5e3c7b45e81fb2a3d542d4b3390c4"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.048204 4739 generic.go:334] "Generic (PLEG): container finished" podID="6955631f-9981-47a5-8ecb-8756df4e0256" containerID="9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64" exitCode=0 Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.048247 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerDied","Data":"9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.048298 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerStarted","Data":"10d8a724d59bd6a5d14617a528e748b2601030ae0dc43e290bc4b95d4dedba40"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.050672 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" event={"ID":"42c00254-0b69-45d3-8dd6-7f2ee914d65d","Type":"ContainerStarted","Data":"c53d5a482db632b149d61954455c1b63897dc05aa1c7bf18271a0c5962e25f92"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.050711 4739 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" event={"ID":"42c00254-0b69-45d3-8dd6-7f2ee914d65d","Type":"ContainerStarted","Data":"b96e22f2e4072131e39645eec1bdeb575f2e322af330e9ccff4e59c7655f9d27"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.050817 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.056068 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerID="d9f38a5539526a77e4dfda52eaa55e735ab6abeb3007d8993d086f49fd96f3f0" exitCode=0 Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.057170 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerDied","Data":"d9f38a5539526a77e4dfda52eaa55e735ab6abeb3007d8993d086f49fd96f3f0"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.057207 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerStarted","Data":"96ae9a700ac6737e5625e17caed3c6cbabf21ead3f7cc350e69ee97905a208a7"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.097089 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" podStartSLOduration=125.09706732 podStartE2EDuration="2m5.09706732s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:59.092831061 +0000 UTC m=+151.588551993" watchObservedRunningTime="2026-02-18 14:01:59.09706732 +0000 UTC m=+151.592788252" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.104175 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.105654 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.108777 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.111128 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.114633 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.114707 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.115208 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.149357 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.303735 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.307718 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.373043 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.373393 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.604532 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"] Feb 18 14:01:59 crc kubenswrapper[4739]: W0218 14:01:59.630050 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7772552e_1443_4f54_a50c_a73f55863363.slice/crio-b8cd985c8107733acf822a9680d0b58c3fe410a6ba3b0e24962d1e5b7a41ea56 WatchSource:0}: Error finding container b8cd985c8107733acf822a9680d0b58c3fe410a6ba3b0e24962d1e5b7a41ea56: Status 404 
returned error can't find the container with id b8cd985c8107733acf822a9680d0b58c3fe410a6ba3b0e24962d1e5b7a41ea56 Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.689890 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 14:01:59 crc kubenswrapper[4739]: W0218 14:01:59.691940 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3069f8d4_4c22_4d3e_8d00_b08abfc1ca7a.slice/crio-7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b WatchSource:0}: Error finding container 7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b: Status 404 returned error can't find the container with id 7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.700024 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.702395 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:59 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:59 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:59 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.702437 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.830311 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.078461 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7549289-fee3-4211-b340-731ff70593d1" containerID="9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242" exitCode=0 Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.078585 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerDied","Data":"9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242"} Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.079037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerStarted","Data":"ec2d2f157f528c4b55bc8096e827bd5672ec6bdfb957669781807b88427d0279"} Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.083978 4739 generic.go:334] "Generic (PLEG): container finished" podID="7772552e-1443-4f54-a50c-a73f55863363" containerID="c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b" exitCode=0 Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.084035 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerDied","Data":"c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b"} Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.084059 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerStarted","Data":"b8cd985c8107733acf822a9680d0b58c3fe410a6ba3b0e24962d1e5b7a41ea56"} Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.090786 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a","Type":"ContainerStarted","Data":"7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b"} Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.210977 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.211041 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.210985 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.211344 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.224709 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.296544 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 
14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.298312 4739 patch_prober.go:28] interesting pod/console-f9d7485db-r2dqq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.298352 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-r2dqq" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.299726 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.464334 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.647263 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume\") pod \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.647538 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdqxz\" (UniqueName: \"kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz\") pod \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.647614 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume\") pod \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") "
Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.648112 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume" (OuterVolumeSpecName: "config-volume") pod "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" (UID: "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.655167 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz" (OuterVolumeSpecName: "kube-api-access-bdqxz") pod "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" (UID: "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0"). InnerVolumeSpecName "kube-api-access-bdqxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.672139 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" (UID: "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.723144 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.726734 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.750118 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.750144 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.750154 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdqxz\" (UniqueName: \"kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:01 crc kubenswrapper[4739]: I0218 14:02:01.113893 4739 generic.go:334] "Generic (PLEG): container finished" podID="3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" containerID="cdbff388006ce73bd61ca7ba3e30d7b284e66ccc9d3af37c29cecae01a6214aa" exitCode=0
Feb 18 14:02:01 crc kubenswrapper[4739]: I0218 14:02:01.113949 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a","Type":"ContainerDied","Data":"cdbff388006ce73bd61ca7ba3e30d7b284e66ccc9d3af37c29cecae01a6214aa"}
Feb 18 14:02:01 crc kubenswrapper[4739]: I0218 14:02:01.124589 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"
Feb 18 14:02:01 crc kubenswrapper[4739]: I0218 14:02:01.124584 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" event={"ID":"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0","Type":"ContainerDied","Data":"24416838c3485f5f59f847cbabc4eb0faac583f47943bdc172447667af33c1a4"}
Feb 18 14:02:01 crc kubenswrapper[4739]: I0218 14:02:01.124634 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24416838c3485f5f59f847cbabc4eb0faac583f47943bdc172447667af33c1a4"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.375863 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 18 14:02:02 crc kubenswrapper[4739]: E0218 14:02:02.376431 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" containerName="collect-profiles"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.376462 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" containerName="collect-profiles"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.376592 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" containerName="collect-profiles"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.377039 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.380376 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.385684 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.390869 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.499993 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.500028 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.534237 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.602204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.602248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.602349 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.658079 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.703290 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.703438 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir\") pod \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") "
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.703508 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access\") pod \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") "
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.703504 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" (UID: "3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.703970 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.708438 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" (UID: "3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.804914 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:03 crc kubenswrapper[4739]: I0218 14:02:03.155831 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a","Type":"ContainerDied","Data":"7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b"}
Feb 18 14:02:03 crc kubenswrapper[4739]: I0218 14:02:03.155876 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b"
Feb 18 14:02:03 crc kubenswrapper[4739]: I0218 14:02:03.155935 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 14:02:03 crc kubenswrapper[4739]: I0218 14:02:03.215016 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 18 14:02:03 crc kubenswrapper[4739]: W0218 14:02:03.226989 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod962751cd_ff1a_4e95_8027_aebe728486cd.slice/crio-660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae WatchSource:0}: Error finding container 660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae: Status 404 returned error can't find the container with id 660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae
Feb 18 14:02:04 crc kubenswrapper[4739]: I0218 14:02:04.161064 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"962751cd-ff1a-4e95-8027-aebe728486cd","Type":"ContainerStarted","Data":"660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae"}
Feb 18 14:02:05 crc kubenswrapper[4739]: I0218 14:02:05.188313 4739 generic.go:334] "Generic (PLEG): container finished" podID="962751cd-ff1a-4e95-8027-aebe728486cd" containerID="b8f2018a5a199accc20294c64dc0aa16c4653a6e7a1587e33d27ea34f1e7df2f" exitCode=0
Feb 18 14:02:05 crc kubenswrapper[4739]: I0218 14:02:05.188356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"962751cd-ff1a-4e95-8027-aebe728486cd","Type":"ContainerDied","Data":"b8f2018a5a199accc20294c64dc0aa16c4653a6e7a1587e33d27ea34f1e7df2f"}
Feb 18 14:02:05 crc kubenswrapper[4739]: I0218 14:02:05.508804 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8lgk6"
Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.501887 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.671758 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access\") pod \"962751cd-ff1a-4e95-8027-aebe728486cd\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") "
Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.671833 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir\") pod \"962751cd-ff1a-4e95-8027-aebe728486cd\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") "
Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.671970 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "962751cd-ff1a-4e95-8027-aebe728486cd" (UID: "962751cd-ff1a-4e95-8027-aebe728486cd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.672138 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.687313 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "962751cd-ff1a-4e95-8027-aebe728486cd" (UID: "962751cd-ff1a-4e95-8027-aebe728486cd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.773040 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:07 crc kubenswrapper[4739]: I0218 14:02:07.208311 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"962751cd-ff1a-4e95-8027-aebe728486cd","Type":"ContainerDied","Data":"660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae"}
Feb 18 14:02:07 crc kubenswrapper[4739]: I0218 14:02:07.208346 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae"
Feb 18 14:02:07 crc kubenswrapper[4739]: I0218 14:02:07.208358 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 14:02:10 crc kubenswrapper[4739]: I0218 14:02:10.214872 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-rtb8n"
Feb 18 14:02:10 crc kubenswrapper[4739]: I0218 14:02:10.317438 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-r2dqq"
Feb 18 14:02:10 crc kubenswrapper[4739]: I0218 14:02:10.321886 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-r2dqq"
Feb 18 14:02:16 crc kubenswrapper[4739]: I0218 14:02:16.723267 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:02:16 crc kubenswrapper[4739]: I0218 14:02:16.733492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:02:16 crc kubenswrapper[4739]: I0218 14:02:16.923843 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:02:17 crc kubenswrapper[4739]: I0218 14:02:17.825372 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.568597 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.569249 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qt9pm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ccnsw_openshift-marketplace(7772552e-1443-4f54-a50c-a73f55863363): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.572716 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ccnsw" podUID="7772552e-1443-4f54-a50c-a73f55863363"
Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.618408 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.618585 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gg4gr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-47vjm_openshift-marketplace(a44b0172-9ef1-4181-8380-bfe703bdc50d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.619701 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-47vjm" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d"
Feb 18 14:02:28 crc kubenswrapper[4739]: I0218 14:02:28.690613 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nhkmm"]
Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.700618 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.700760 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r242g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-n8kkn_openshift-marketplace(7ce55882-0feb-4edb-99df-9df2dcb6e62e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.701973 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-n8kkn" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e"
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.341793 4739 generic.go:334] "Generic (PLEG): container finished" podID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerID="e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662" exitCode=0
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.342381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerDied","Data":"e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662"}
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.344678 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerID="c7007a9b012b9e998d5fc274e2d579ca39008b701f39ad42a1d228cbf01383d0" exitCode=0
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.344740 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerDied","Data":"c7007a9b012b9e998d5fc274e2d579ca39008b701f39ad42a1d228cbf01383d0"}
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.346997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" event={"ID":"151d76ab-14d7-4b0b-a930-785156818a3e","Type":"ContainerStarted","Data":"f194c07096de388bc3341863c0856a96bdf670c60f9d57b7eb4f4b94ac43a7d0"}
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.347051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" event={"ID":"151d76ab-14d7-4b0b-a930-785156818a3e","Type":"ContainerStarted","Data":"03a828a3b77017391b65e8d41e2dccc5854cffa517314ec856ea8317072d18a8"}
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.347063 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" event={"ID":"151d76ab-14d7-4b0b-a930-785156818a3e","Type":"ContainerStarted","Data":"6ee88ab257606ffd317d7e44ee6b70d65dd1f0ac0630eb23bdee3082d8d2ad30"}
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.351628 4739 generic.go:334] "Generic (PLEG): container finished" podID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerID="0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6" exitCode=0
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.351677 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerDied","Data":"0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6"}
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.354423 4739 generic.go:334] "Generic (PLEG): container finished" podID="6955631f-9981-47a5-8ecb-8756df4e0256" containerID="8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c" exitCode=0
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.354502 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerDied","Data":"8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c"}
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.366419 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerStarted","Data":"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c"}
Feb 18 14:02:29 crc kubenswrapper[4739]: E0218 14:02:29.368193 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ccnsw" podUID="7772552e-1443-4f54-a50c-a73f55863363"
Feb 18 14:02:29 crc kubenswrapper[4739]: E0218 14:02:29.368402 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-n8kkn" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e"
Feb 18 14:02:29 crc kubenswrapper[4739]: E0218 14:02:29.368421 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-47vjm" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d"
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.373026 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.373069 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.445026 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-nhkmm" podStartSLOduration=155.445003911 podStartE2EDuration="2m35.445003911s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:02:29.384843835 +0000 UTC m=+181.880564767" watchObservedRunningTime="2026-02-18 14:02:29.445003911 +0000 UTC m=+181.940724833"
Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.376524 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7549289-fee3-4211-b340-731ff70593d1" containerID="d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c" exitCode=0
Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.376592 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerDied","Data":"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c"}
Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.387175 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerStarted","Data":"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5"}
Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.392574 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerStarted","Data":"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8"}
Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.421771 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wznkg" podStartSLOduration=2.317480171 podStartE2EDuration="33.42175384s" podCreationTimestamp="2026-02-18 14:01:57 +0000 UTC" firstStartedPulling="2026-02-18 14:01:59.056027366 +0000 UTC m=+151.551748288" lastFinishedPulling="2026-02-18 14:02:30.160301035 +0000 UTC m=+182.656021957" observedRunningTime="2026-02-18 14:02:30.419999135 +0000 UTC m=+182.915720067" watchObservedRunningTime="2026-02-18 14:02:30.42175384 +0000 UTC m=+182.917474762"
Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.432524 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx"
Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.445601 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2ch5b" podStartSLOduration=3.333328062 podStartE2EDuration="35.445584403s" podCreationTimestamp="2026-02-18 14:01:55 +0000 UTC" firstStartedPulling="2026-02-18 14:01:58.015392235 +0000 UTC m=+150.511113157" lastFinishedPulling="2026-02-18 14:02:30.127648576 +0000 UTC m=+182.623369498" observedRunningTime="2026-02-18 14:02:30.441551749 +0000 UTC m=+182.937272671" watchObservedRunningTime="2026-02-18 14:02:30.445584403 +0000 UTC m=+182.941305325"
Feb 18 14:02:31 crc kubenswrapper[4739]: I0218 14:02:31.400114 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerStarted","Data":"d8b0e45f2489b814f6c651908ac9de9ccfdd37970f3be25b936a09332a3b1f38"}
Feb 18 14:02:31 crc kubenswrapper[4739]: I0218 14:02:31.402612 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerStarted","Data":"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6"}
Feb 18 14:02:31 crc kubenswrapper[4739]: I0218 14:02:31.406536 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerStarted","Data":"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8"}
Feb 18 14:02:31 crc kubenswrapper[4739]: I0218 14:02:31.421752 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fst2x" podStartSLOduration=2.220334294 podStartE2EDuration="33.421733378s" podCreationTimestamp="2026-02-18 14:01:58 +0000 UTC" firstStartedPulling="2026-02-18 14:01:59.063663942 +0000 UTC m=+151.559384864" lastFinishedPulling="2026-02-18 14:02:30.265063026 +0000 UTC m=+182.760783948" observedRunningTime="2026-02-18 14:02:31.420707332 +0000 UTC m=+183.916428254" watchObservedRunningTime="2026-02-18 14:02:31.421733378 +0000 UTC m=+183.917454300"
Feb 18 14:02:31 crc kubenswrapper[4739]: I0218 14:02:31.441624 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t5j8b" podStartSLOduration=4.036945127 podStartE2EDuration="36.441608718s" podCreationTimestamp="2026-02-18 14:01:55 +0000 UTC" firstStartedPulling="2026-02-18 14:01:57.966830548 +0000 UTC m=+150.462551470" lastFinishedPulling="2026-02-18 14:02:30.371494139 +0000 UTC m=+182.867215061" observedRunningTime="2026-02-18 14:02:31.440292735 +0000 UTC m=+183.936013667" watchObservedRunningTime="2026-02-18 14:02:31.441608718 +0000 UTC m=+183.937329640"
Feb 18 14:02:35 crc kubenswrapper[4739]: I0218 14:02:35.600872 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2ch5b"
Feb 18 14:02:35 crc kubenswrapper[4739]: I0218 14:02:35.601292 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2ch5b"
Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.064948 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2ch5b"
Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.080591 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fm56z" podStartSLOduration=7.372635346 podStartE2EDuration="38.080574923s" podCreationTimestamp="2026-02-18 14:01:58 +0000
UTC" firstStartedPulling="2026-02-18 14:02:00.080410281 +0000 UTC m=+152.576131203" lastFinishedPulling="2026-02-18 14:02:30.788349858 +0000 UTC m=+183.284070780" observedRunningTime="2026-02-18 14:02:31.461988742 +0000 UTC m=+183.957709664" watchObservedRunningTime="2026-02-18 14:02:36.080574923 +0000 UTC m=+188.576295845" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.231658 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.231714 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.273539 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.466677 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.472518 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.472571 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:37 crc kubenswrapper[4739]: I0218 14:02:37.990318 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:02:37 crc kubenswrapper[4739]: I0218 14:02:37.990374 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.035247 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.405681 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.406016 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.427574 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"] Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.513358 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.516000 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.587208 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.671904 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.672544 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t5j8b" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="registry-server" containerID="cri-o://d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6" gracePeriod=2 Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.858109 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.858157 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.917561 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.046369 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.137083 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content\") pod \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.137268 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wccvz\" (UniqueName: \"kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz\") pod \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.137300 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities\") pod \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.138175 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities" (OuterVolumeSpecName: "utilities") pod "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" (UID: "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.142849 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz" (OuterVolumeSpecName: "kube-api-access-wccvz") pod "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" (UID: "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2"). InnerVolumeSpecName "kube-api-access-wccvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.205433 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" (UID: "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.238255 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wccvz\" (UniqueName: \"kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.238292 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.238302 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.460213 4739 generic.go:334] "Generic (PLEG): container finished" podID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" 
containerID="d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6" exitCode=0 Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.460281 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.460330 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerDied","Data":"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6"} Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.460383 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerDied","Data":"81b5fcbef1870c44069bb7dc9291550938515d7a028de25c6b79896e1bc2cecd"} Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.460418 4739 scope.go:117] "RemoveContainer" containerID="d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.475302 4739 scope.go:117] "RemoveContainer" containerID="0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.491672 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.493388 4739 scope.go:117] "RemoveContainer" containerID="0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.494305 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.518650 4739 scope.go:117] "RemoveContainer" containerID="d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6" Feb 18 
14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.519050 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:02:39 crc kubenswrapper[4739]: E0218 14:02:39.519166 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6\": container with ID starting with d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6 not found: ID does not exist" containerID="d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.519209 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6"} err="failed to get container status \"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6\": rpc error: code = NotFound desc = could not find container \"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6\": container with ID starting with d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6 not found: ID does not exist" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.519288 4739 scope.go:117] "RemoveContainer" containerID="0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6" Feb 18 14:02:39 crc kubenswrapper[4739]: E0218 14:02:39.519754 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6\": container with ID starting with 0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6 not found: ID does not exist" containerID="0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.519804 4739 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6"} err="failed to get container status \"0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6\": rpc error: code = NotFound desc = could not find container \"0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6\": container with ID starting with 0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6 not found: ID does not exist" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.519831 4739 scope.go:117] "RemoveContainer" containerID="0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5" Feb 18 14:02:39 crc kubenswrapper[4739]: E0218 14:02:39.520272 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5\": container with ID starting with 0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5 not found: ID does not exist" containerID="0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.520315 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5"} err="failed to get container status \"0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5\": rpc error: code = NotFound desc = could not find container \"0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5\": container with ID starting with 0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5 not found: ID does not exist" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.416606 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" path="/var/lib/kubelet/pods/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2/volumes" Feb 18 14:02:40 crc 
kubenswrapper[4739]: I0218 14:02:40.782652 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 14:02:40 crc kubenswrapper[4739]: E0218 14:02:40.782913 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="extract-utilities" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.782930 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="extract-utilities" Feb 18 14:02:40 crc kubenswrapper[4739]: E0218 14:02:40.782944 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.782952 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: E0218 14:02:40.782964 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="registry-server" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.782972 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="registry-server" Feb 18 14:02:40 crc kubenswrapper[4739]: E0218 14:02:40.782982 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962751cd-ff1a-4e95-8027-aebe728486cd" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.782990 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="962751cd-ff1a-4e95-8027-aebe728486cd" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: E0218 14:02:40.783005 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="extract-content" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.783013 4739 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="extract-content" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.783138 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.783149 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="962751cd-ff1a-4e95-8027-aebe728486cd" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.783157 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="registry-server" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.783641 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.787701 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.788100 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.796932 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.862400 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.862459 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.963178 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.963252 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.963510 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.982028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:41 crc kubenswrapper[4739]: I0218 14:02:41.072157 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"] Feb 18 14:02:41 crc kubenswrapper[4739]: 
I0218 14:02:41.072732 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fst2x" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="registry-server" containerID="cri-o://d8b0e45f2489b814f6c651908ac9de9ccfdd37970f3be25b936a09332a3b1f38" gracePeriod=2 Feb 18 14:02:41 crc kubenswrapper[4739]: I0218 14:02:41.110780 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:41 crc kubenswrapper[4739]: I0218 14:02:41.335012 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 14:02:41 crc kubenswrapper[4739]: I0218 14:02:41.476974 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bbdd0a7f-2264-4d64-a5a7-1665422dc55e","Type":"ContainerStarted","Data":"4594ae73637724cfadec7d9508ed2522518c7095617adf88529667a39028681d"} Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.484815 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bbdd0a7f-2264-4d64-a5a7-1665422dc55e","Type":"ContainerStarted","Data":"e811f61f8fe9da1df5f299f9a0ac13882cde48874dc9a82a271fcbd8e42250e0"} Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.488108 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerID="d8b0e45f2489b814f6c651908ac9de9ccfdd37970f3be25b936a09332a3b1f38" exitCode=0 Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.488154 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerDied","Data":"d8b0e45f2489b814f6c651908ac9de9ccfdd37970f3be25b936a09332a3b1f38"} Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.501392 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.501374198 podStartE2EDuration="2.501374198s" podCreationTimestamp="2026-02-18 14:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:02:42.498376481 +0000 UTC m=+194.994097403" watchObservedRunningTime="2026-02-18 14:02:42.501374198 +0000 UTC m=+194.997095120" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.818327 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.887156 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities\") pod \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.887206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content\") pod \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.887275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v8dg\" (UniqueName: \"kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg\") pod \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.888328 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities" 
(OuterVolumeSpecName: "utilities") pod "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" (UID: "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.896625 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg" (OuterVolumeSpecName: "kube-api-access-7v8dg") pod "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" (UID: "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6"). InnerVolumeSpecName "kube-api-access-7v8dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.920618 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" (UID: "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.988295 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.988327 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.988337 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v8dg\" (UniqueName: \"kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.496222 4739 generic.go:334] "Generic (PLEG): container finished" podID="bbdd0a7f-2264-4d64-a5a7-1665422dc55e" containerID="e811f61f8fe9da1df5f299f9a0ac13882cde48874dc9a82a271fcbd8e42250e0" exitCode=0 Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.496367 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bbdd0a7f-2264-4d64-a5a7-1665422dc55e","Type":"ContainerDied","Data":"e811f61f8fe9da1df5f299f9a0ac13882cde48874dc9a82a271fcbd8e42250e0"} Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.499409 4739 generic.go:334] "Generic (PLEG): container finished" podID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerID="4e07a94ec0847b4e99755ab2a06cb038c67fb9badd5a1660eeebdbdd132f59cc" exitCode=0 Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.499497 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" 
event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerDied","Data":"4e07a94ec0847b4e99755ab2a06cb038c67fb9badd5a1660eeebdbdd132f59cc"} Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.503418 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerStarted","Data":"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb"} Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.507832 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerDied","Data":"96ae9a700ac6737e5625e17caed3c6cbabf21ead3f7cc350e69ee97905a208a7"} Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.507874 4739 scope.go:117] "RemoveContainer" containerID="d8b0e45f2489b814f6c651908ac9de9ccfdd37970f3be25b936a09332a3b1f38" Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.507990 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fst2x"
Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.537660 4739 scope.go:117] "RemoveContainer" containerID="c7007a9b012b9e998d5fc274e2d579ca39008b701f39ad42a1d228cbf01383d0"
Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.587796 4739 scope.go:117] "RemoveContainer" containerID="d9f38a5539526a77e4dfda52eaa55e735ab6abeb3007d8993d086f49fd96f3f0"
Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.590778 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"]
Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.596553 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"]
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.425717 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" path="/var/lib/kubelet/pods/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6/volumes"
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.519603 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerStarted","Data":"b2a60f4fb9b49f347db21a50c2097f9a1a95de43e825543cb9badb0925f33d62"}
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.522268 4739 generic.go:334] "Generic (PLEG): container finished" podID="7772552e-1443-4f54-a50c-a73f55863363" containerID="ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb" exitCode=0
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.522317 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerDied","Data":"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb"}
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.545320 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n8kkn" podStartSLOduration=3.58545342 podStartE2EDuration="49.545255742s" podCreationTimestamp="2026-02-18 14:01:55 +0000 UTC" firstStartedPulling="2026-02-18 14:01:57.982591263 +0000 UTC m=+150.478312185" lastFinishedPulling="2026-02-18 14:02:43.942393585 +0000 UTC m=+196.438114507" observedRunningTime="2026-02-18 14:02:44.539085413 +0000 UTC m=+197.034806345" watchObservedRunningTime="2026-02-18 14:02:44.545255742 +0000 UTC m=+197.040976664"
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.766774 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.814330 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access\") pod \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") "
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.814385 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir\") pod \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") "
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.814536 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bbdd0a7f-2264-4d64-a5a7-1665422dc55e" (UID: "bbdd0a7f-2264-4d64-a5a7-1665422dc55e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.814823 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.829597 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bbdd0a7f-2264-4d64-a5a7-1665422dc55e" (UID: "bbdd0a7f-2264-4d64-a5a7-1665422dc55e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.916289 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.533992 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bbdd0a7f-2264-4d64-a5a7-1665422dc55e","Type":"ContainerDied","Data":"4594ae73637724cfadec7d9508ed2522518c7095617adf88529667a39028681d"}
Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.534296 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4594ae73637724cfadec7d9508ed2522518c7095617adf88529667a39028681d"
Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.534013 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.536084 4739 generic.go:334] "Generic (PLEG): container finished" podID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerID="e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10" exitCode=0
Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.536150 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerDied","Data":"e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10"}
Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.538989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerStarted","Data":"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609"}
Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.582560 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ccnsw" podStartSLOduration=2.769830766 podStartE2EDuration="47.582534665s" podCreationTimestamp="2026-02-18 14:01:58 +0000 UTC" firstStartedPulling="2026-02-18 14:02:00.086065816 +0000 UTC m=+152.581786728" lastFinishedPulling="2026-02-18 14:02:44.898769705 +0000 UTC m=+197.394490627" observedRunningTime="2026-02-18 14:02:45.580830429 +0000 UTC m=+198.076551371" watchObservedRunningTime="2026-02-18 14:02:45.582534665 +0000 UTC m=+198.078255607"
Feb 18 14:02:46 crc kubenswrapper[4739]: I0218 14:02:46.023336 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n8kkn"
Feb 18 14:02:46 crc kubenswrapper[4739]: I0218 14:02:46.023382 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n8kkn"
Feb 18 14:02:46 crc kubenswrapper[4739]: I0218 14:02:46.077852 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n8kkn"
Feb 18 14:02:46 crc kubenswrapper[4739]: I0218 14:02:46.547577 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerStarted","Data":"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa"}
Feb 18 14:02:46 crc kubenswrapper[4739]: I0218 14:02:46.577763 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-47vjm" podStartSLOduration=3.632168132 podStartE2EDuration="51.577746492s" podCreationTimestamp="2026-02-18 14:01:55 +0000 UTC" firstStartedPulling="2026-02-18 14:01:57.97548553 +0000 UTC m=+150.471206452" lastFinishedPulling="2026-02-18 14:02:45.92106389 +0000 UTC m=+198.416784812" observedRunningTime="2026-02-18 14:02:46.575206385 +0000 UTC m=+199.070927347" watchObservedRunningTime="2026-02-18 14:02:46.577746492 +0000 UTC m=+199.073467414"
Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.972150 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 14:02:47 crc kubenswrapper[4739]: E0218 14:02:47.972815 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="registry-server"
Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.972839 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="registry-server"
Feb 18 14:02:47 crc kubenswrapper[4739]: E0218 14:02:47.972859 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdd0a7f-2264-4d64-a5a7-1665422dc55e" containerName="pruner"
Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.972870 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdd0a7f-2264-4d64-a5a7-1665422dc55e" containerName="pruner"
Feb 18 14:02:47 crc kubenswrapper[4739]: E0218 14:02:47.972888 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="extract-content"
Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.972899 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="extract-content"
Feb 18 14:02:47 crc kubenswrapper[4739]: E0218 14:02:47.972923 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="extract-utilities"
Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.972934 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="extract-utilities"
Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.973083 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdd0a7f-2264-4d64-a5a7-1665422dc55e" containerName="pruner"
Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.973113 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="registry-server"
Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.973667 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.979782 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.980044 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:47.990087 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.055318 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.055538 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.055612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.157293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.157388 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.157419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.157501 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.157538 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.184021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.323505 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.640049 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 14:02:49 crc kubenswrapper[4739]: I0218 14:02:49.036022 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ccnsw"
Feb 18 14:02:49 crc kubenswrapper[4739]: I0218 14:02:49.036340 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ccnsw"
Feb 18 14:02:49 crc kubenswrapper[4739]: I0218 14:02:49.569685 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e440b2ba-20b4-4568-99bc-ffad1f19eafb","Type":"ContainerStarted","Data":"37732bee3d0ca90d1c6df703d80575c9d4075b9f00e0d96971f76ccebc6611c8"}
Feb 18 14:02:49 crc kubenswrapper[4739]: I0218 14:02:49.569727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e440b2ba-20b4-4568-99bc-ffad1f19eafb","Type":"ContainerStarted","Data":"a14cae65a1f3403447f1c63df6c06c98c502f096844a01ad5304537c30625604"}
Feb 18 14:02:49 crc kubenswrapper[4739]: I0218 14:02:49.586671 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.58665167 podStartE2EDuration="2.58665167s" podCreationTimestamp="2026-02-18 14:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:02:49.58251063 +0000 UTC m=+202.078231552" watchObservedRunningTime="2026-02-18 14:02:49.58665167 +0000 UTC m=+202.082372612"
Feb 18 14:02:50 crc kubenswrapper[4739]: I0218 14:02:50.094384 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ccnsw" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="registry-server" probeResult="failure" output=<
Feb 18 14:02:50 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 14:02:50 crc kubenswrapper[4739]: >
Feb 18 14:02:55 crc kubenswrapper[4739]: I0218 14:02:55.807335 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-47vjm"
Feb 18 14:02:55 crc kubenswrapper[4739]: I0218 14:02:55.808017 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-47vjm"
Feb 18 14:02:55 crc kubenswrapper[4739]: I0218 14:02:55.853883 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-47vjm"
Feb 18 14:02:56 crc kubenswrapper[4739]: I0218 14:02:56.068022 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n8kkn"
Feb 18 14:02:56 crc kubenswrapper[4739]: I0218 14:02:56.721631 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-47vjm"
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.275904 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"]
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.276341 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n8kkn" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="registry-server" containerID="cri-o://b2a60f4fb9b49f347db21a50c2097f9a1a95de43e825543cb9badb0925f33d62" gracePeriod=2
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.667389 4739 generic.go:334] "Generic (PLEG): container finished" podID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerID="b2a60f4fb9b49f347db21a50c2097f9a1a95de43e825543cb9badb0925f33d62" exitCode=0
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.667508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerDied","Data":"b2a60f4fb9b49f347db21a50c2097f9a1a95de43e825543cb9badb0925f33d62"}
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.667578 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerDied","Data":"1bb8b1ac920da0708b75374c6eb6ccb11af1b832abba028a06c828609d37f144"}
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.667597 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bb8b1ac920da0708b75374c6eb6ccb11af1b832abba028a06c828609d37f144"
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.684843 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8kkn"
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.824391 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r242g\" (UniqueName: \"kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g\") pod \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") "
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.824867 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content\") pod \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") "
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.825082 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities\") pod \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") "
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.825825 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities" (OuterVolumeSpecName: "utilities") pod "7ce55882-0feb-4edb-99df-9df2dcb6e62e" (UID: "7ce55882-0feb-4edb-99df-9df2dcb6e62e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.826003 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.829614 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g" (OuterVolumeSpecName: "kube-api-access-r242g") pod "7ce55882-0feb-4edb-99df-9df2dcb6e62e" (UID: "7ce55882-0feb-4edb-99df-9df2dcb6e62e"). InnerVolumeSpecName "kube-api-access-r242g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.889331 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ce55882-0feb-4edb-99df-9df2dcb6e62e" (UID: "7ce55882-0feb-4edb-99df-9df2dcb6e62e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.927683 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r242g\" (UniqueName: \"kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.927720 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:58 crc kubenswrapper[4739]: I0218 14:02:58.674259 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8kkn"
Feb 18 14:02:58 crc kubenswrapper[4739]: I0218 14:02:58.700879 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"]
Feb 18 14:02:58 crc kubenswrapper[4739]: I0218 14:02:58.704675 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"]
Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.086436 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ccnsw"
Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.126377 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ccnsw"
Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.372847 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.372922 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.372981 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4"
Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.373660 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.373759 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4" gracePeriod=600
Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.686604 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4" exitCode=0
Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.686843 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4"}
Feb 18 14:03:00 crc kubenswrapper[4739]: I0218 14:03:00.421699 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" path="/var/lib/kubelet/pods/7ce55882-0feb-4edb-99df-9df2dcb6e62e/volumes"
Feb 18 14:03:00 crc kubenswrapper[4739]: I0218 14:03:00.698074 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6"}
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.280201 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"]
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.280613 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ccnsw" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="registry-server" containerID="cri-o://8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609" gracePeriod=2
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.673902 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccnsw"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.720327 4739 generic.go:334] "Generic (PLEG): container finished" podID="7772552e-1443-4f54-a50c-a73f55863363" containerID="8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609" exitCode=0
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.720381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerDied","Data":"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609"}
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.720470 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerDied","Data":"b8cd985c8107733acf822a9680d0b58c3fe410a6ba3b0e24962d1e5b7a41ea56"}
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.720497 4739 scope.go:117] "RemoveContainer" containerID="8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.720634 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccnsw"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.744694 4739 scope.go:117] "RemoveContainer" containerID="ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.770542 4739 scope.go:117] "RemoveContainer" containerID="c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.791354 4739 scope.go:117] "RemoveContainer" containerID="8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609"
Feb 18 14:03:02 crc kubenswrapper[4739]: E0218 14:03:02.791875 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609\": container with ID starting with 8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609 not found: ID does not exist" containerID="8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.791919 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609"} err="failed to get container status \"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609\": rpc error: code = NotFound desc = could not find container \"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609\": container with ID starting with 8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609 not found: ID does not exist"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.791947 4739 scope.go:117] "RemoveContainer" containerID="ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb"
Feb 18 14:03:02 crc kubenswrapper[4739]: E0218 14:03:02.792263 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb\": container with ID starting with ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb not found: ID does not exist" containerID="ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.792297 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb"} err="failed to get container status \"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb\": rpc error: code = NotFound desc = could not find container \"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb\": container with ID starting with ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb not found: ID does not exist"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.792318 4739 scope.go:117] "RemoveContainer" containerID="c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b"
Feb 18 14:03:02 crc kubenswrapper[4739]: E0218 14:03:02.792653 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b\": container with ID starting with c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b not found: ID does not exist" containerID="c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.792715 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b"} err="failed to get container status \"c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b\": rpc error: code = NotFound desc = could not find container \"c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b\": container with ID starting with c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b not found: ID does not exist"
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.795190 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities\") pod \"7772552e-1443-4f54-a50c-a73f55863363\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") "
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.795317 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt9pm\" (UniqueName: \"kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm\") pod \"7772552e-1443-4f54-a50c-a73f55863363\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") "
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.795398 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content\") pod \"7772552e-1443-4f54-a50c-a73f55863363\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") "
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.797094 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities" (OuterVolumeSpecName: "utilities") pod "7772552e-1443-4f54-a50c-a73f55863363" (UID: "7772552e-1443-4f54-a50c-a73f55863363"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.804195 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm" (OuterVolumeSpecName: "kube-api-access-qt9pm") pod "7772552e-1443-4f54-a50c-a73f55863363" (UID: "7772552e-1443-4f54-a50c-a73f55863363"). InnerVolumeSpecName "kube-api-access-qt9pm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.897061 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.897120 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt9pm\" (UniqueName: \"kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm\") on node \"crc\" DevicePath \"\""
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.957727 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7772552e-1443-4f54-a50c-a73f55863363" (UID: "7772552e-1443-4f54-a50c-a73f55863363"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.998801 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.048222 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"]
Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.050709 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"]
Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.475925 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" containerID="cri-o://2091e0b6ec823c2be46cc955f8e1860f25dcbaf76d40f0a02489ec9b087df706" gracePeriod=15
Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.732527 4739 generic.go:334] "Generic (PLEG): container finished" podID="663bc659-8603-490f-9b6e-7ffe14960463" containerID="2091e0b6ec823c2be46cc955f8e1860f25dcbaf76d40f0a02489ec9b087df706" exitCode=0
Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.732630 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" event={"ID":"663bc659-8603-490f-9b6e-7ffe14960463","Type":"ContainerDied","Data":"2091e0b6ec823c2be46cc955f8e1860f25dcbaf76d40f0a02489ec9b087df706"}
Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.886762 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j"
Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014299 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") "
Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014377 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") "
Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014415 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") "
Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014479 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq67j\" (UniqueName: \"kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") "
Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014563 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: 
\"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014595 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014664 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014714 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014771 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014834 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 
14:03:04.014875 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014929 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014966 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.015024 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.015391 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.015550 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.016555 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.016756 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.016845 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.021061 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j" (OuterVolumeSpecName: "kube-api-access-zq67j") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "kube-api-access-zq67j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.026047 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.028631 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.029613 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.030005 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.030327 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.030757 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.031088 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.031287 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116482 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116850 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116871 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116890 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116908 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq67j\" (UniqueName: \"kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j\") on node \"crc\" 
DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116923 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116940 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116959 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116975 4739 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116989 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.117002 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.117016 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.117033 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.117050 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.426098 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7772552e-1443-4f54-a50c-a73f55863363" path="/var/lib/kubelet/pods/7772552e-1443-4f54-a50c-a73f55863363/volumes" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.744004 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" event={"ID":"663bc659-8603-490f-9b6e-7ffe14960463","Type":"ContainerDied","Data":"39ed9908fc06adc6beaf03f5a0f7a7f9cb74f347fecb397c807b3e8019f3cdd9"} Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.744113 4739 scope.go:117] "RemoveContainer" containerID="2091e0b6ec823c2be46cc955f8e1860f25dcbaf76d40f0a02489ec9b087df706" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.744131 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.775048 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"] Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.778749 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"] Feb 18 14:03:06 crc kubenswrapper[4739]: I0218 14:03:06.417973 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="663bc659-8603-490f-9b6e-7ffe14960463" path="/var/lib/kubelet/pods/663bc659-8603-490f-9b6e-7ffe14960463/volumes" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.613579 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-798cf5fb96-6gsw8"] Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614130 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614152 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614173 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="extract-content" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614184 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="extract-content" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614196 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614206 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614221 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="extract-utilities" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614234 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="extract-utilities" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614256 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="extract-content" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614266 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="extract-content" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614289 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614300 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614313 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="extract-utilities" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614323 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="extract-utilities" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614492 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614518 4739 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614564 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.615981 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.621179 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.621614 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.621987 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.622991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.622011 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.622054 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.622268 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.622765 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.623385 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.623556 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.623569 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.624678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.624740 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625056 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " 
pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625115 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-error\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625190 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk844\" (UniqueName: \"kubernetes.io/projected/bcd76c5a-1d18-4986-9be4-399139f65c11-kube-api-access-nk844\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-policies\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-session\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625372 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-dir\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625528 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-router-certs\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-service-ca\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.624814 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625685 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625789 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625823 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-login\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.626153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.639210 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.643076 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.646836 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-798cf5fb96-6gsw8"] Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.667561 4739 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727500 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-error\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727564 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk844\" (UniqueName: \"kubernetes.io/projected/bcd76c5a-1d18-4986-9be4-399139f65c11-kube-api-access-nk844\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727588 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-policies\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727610 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-session\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727635 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-dir\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727665 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-router-certs\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-service-ca\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727737 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727780 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " 
pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727802 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-login\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727825 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727862 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727912 4739 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.728045 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-dir\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.729281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-service-ca\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.729369 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.730190 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-policies\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc 
kubenswrapper[4739]: I0218 14:03:08.730426 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.734354 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-error\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.735854 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.736361 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-router-certs\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.736571 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-login\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.738379 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.738580 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.739513 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-session\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.742014 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" 
Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.757770 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk844\" (UniqueName: \"kubernetes.io/projected/bcd76c5a-1d18-4986-9be4-399139f65c11-kube-api-access-nk844\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.959327 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:09 crc kubenswrapper[4739]: I0218 14:03:09.245795 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-798cf5fb96-6gsw8"] Feb 18 14:03:09 crc kubenswrapper[4739]: I0218 14:03:09.776082 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" event={"ID":"bcd76c5a-1d18-4986-9be4-399139f65c11","Type":"ContainerStarted","Data":"873aca0bbc81a7124b75ae87a2863a7a8a119c825b1bc26fde747334cd6eb3e4"} Feb 18 14:03:09 crc kubenswrapper[4739]: I0218 14:03:09.776491 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:09 crc kubenswrapper[4739]: I0218 14:03:09.776510 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" event={"ID":"bcd76c5a-1d18-4986-9be4-399139f65c11","Type":"ContainerStarted","Data":"d93ce4fea1217ed8b6ec72243e4e8b583cb0fce1aa47f890c4b2eb96721eb3f8"} Feb 18 14:03:09 crc kubenswrapper[4739]: I0218 14:03:09.802323 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podStartSLOduration=31.802296167 podStartE2EDuration="31.802296167s" podCreationTimestamp="2026-02-18 14:02:38 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:03:09.801004263 +0000 UTC m=+222.296725245" watchObservedRunningTime="2026-02-18 14:03:09.802296167 +0000 UTC m=+222.298017129" Feb 18 14:03:10 crc kubenswrapper[4739]: I0218 14:03:10.155242 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.605347 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.606997 4739 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607186 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607405 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc" gracePeriod=15 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607546 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59" gracePeriod=15 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607596 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" containerID="cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e" gracePeriod=15 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607561 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990" gracePeriod=15 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607630 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db" gracePeriod=15 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612083 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612705 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612747 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612785 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612803 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 
14:03:26.612833 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612849 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612865 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612880 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612901 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612916 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612940 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612955 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613177 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613214 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 
14:03:26.613232 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613253 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613277 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613304 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.613583 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613607 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.738678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739244 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739328 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739362 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739553 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739659 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739793 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841044 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841146 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841211 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841245 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841331 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841361 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841494 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841505 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841410 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841587 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841639 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.883085 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.884813 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.885849 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db" exitCode=0
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.885880 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59" exitCode=0
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.885890 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990" exitCode=0
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.885900 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e" exitCode=2
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.885986 4739 scope.go:117] "RemoveContainer" containerID="8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8"
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.889293 4739 generic.go:334] "Generic (PLEG): container finished" podID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" containerID="37732bee3d0ca90d1c6df703d80575c9d4075b9f00e0d96971f76ccebc6611c8" exitCode=0
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.889328 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e440b2ba-20b4-4568-99bc-ffad1f19eafb","Type":"ContainerDied","Data":"37732bee3d0ca90d1c6df703d80575c9d4075b9f00e0d96971f76ccebc6611c8"}
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.890105 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.890347 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:27 crc kubenswrapper[4739]: I0218 14:03:27.897791 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.905986 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.906533 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.906895 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.907269 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.907622 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:27 crc kubenswrapper[4739]: I0218 14:03:27.907657 4739 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.907950 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms"
Feb 18 14:03:28 crc kubenswrapper[4739]: E0218 14:03:28.108694 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms"
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.127931 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.128537 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.129009 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.261967 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access\") pod \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") "
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.262140 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir\") pod \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") "
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.262211 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock\") pod \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") "
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.262705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock" (OuterVolumeSpecName: "var-lock") pod "e440b2ba-20b4-4568-99bc-ffad1f19eafb" (UID: "e440b2ba-20b4-4568-99bc-ffad1f19eafb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.262783 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e440b2ba-20b4-4568-99bc-ffad1f19eafb" (UID: "e440b2ba-20b4-4568-99bc-ffad1f19eafb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.267603 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e440b2ba-20b4-4568-99bc-ffad1f19eafb" (UID: "e440b2ba-20b4-4568-99bc-ffad1f19eafb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.363834 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.363877 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.363895 4739 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock\") on node \"crc\" DevicePath \"\""
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.415557 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.416071 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:28 crc kubenswrapper[4739]: E0218 14:03:28.510271 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms"
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.904583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e440b2ba-20b4-4568-99bc-ffad1f19eafb","Type":"ContainerDied","Data":"a14cae65a1f3403447f1c63df6c06c98c502f096844a01ad5304537c30625604"}
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.904896 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a14cae65a1f3403447f1c63df6c06c98c502f096844a01ad5304537c30625604"
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.904692 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.909326 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.007737 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.008849 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.009398 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.009767 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.172653 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.172750 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.172813 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.173094 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.173130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.173150 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.274115 4739 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.274162 4739 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.274174 4739 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.312028 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s"
Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.754333 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:03:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:03:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:03:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:03:29Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.754853 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.755637 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.756429 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.756871 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.756904 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.914679 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.915437 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc" exitCode=0
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.915511 4739 scope.go:117] "RemoveContainer" containerID="4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.915582 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.932606 4739 scope.go:117] "RemoveContainer" containerID="897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.933348 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.933847 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.949456 4739 scope.go:117] "RemoveContainer" containerID="132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.965920 4739 scope.go:117] "RemoveContainer" containerID="c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.980575 4739 scope.go:117] "RemoveContainer" containerID="6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc"
Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.999811 4739 scope.go:117] "RemoveContainer" containerID="6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.028146 4739 scope.go:117] "RemoveContainer" containerID="4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db"
Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.029284 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\": container with ID starting with 4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db not found: ID does not exist" containerID="4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.029336 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db"} err="failed to get container status \"4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\": rpc error: code = NotFound desc = could not find container \"4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\": container with ID starting with 4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db not found: ID does not exist"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.029369 4739 scope.go:117] "RemoveContainer" containerID="897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59"
Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.029913 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\": container with ID starting with 897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59 not found: ID does not exist" containerID="897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.030018 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59"} err="failed to get container status \"897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\": rpc error: code = NotFound desc = could not find container \"897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\": container with ID starting with 897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59 not found: ID does not exist"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.030100 4739 scope.go:117] "RemoveContainer" containerID="132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990"
Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.030677 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\": container with ID starting with 132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990 not found: ID does not exist" containerID="132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.030771 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990"} err="failed to get container status \"132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\": rpc error: code = NotFound desc = could not find container \"132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\": container with ID starting with 132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990 not found: ID does not exist"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.030849 4739 scope.go:117] "RemoveContainer" containerID="c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e"
Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.031208 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\": container with ID starting with c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e not found: ID does not exist" containerID="c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.031245 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e"} err="failed to get container status \"c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\": rpc error: code = NotFound desc = could not find container \"c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\": container with ID starting with c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e not found: ID does not exist"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.031268 4739 scope.go:117] "RemoveContainer" containerID="6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc"
Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.032202 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\": container with ID starting with 6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc not found: ID does not exist" containerID="6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.032312 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc"} err="failed to get container status \"6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\": rpc error: code = NotFound desc = could not find container \"6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\": container with ID starting with 6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc not found: ID does not exist"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.032382 4739 scope.go:117] "RemoveContainer" containerID="6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a"
Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.034022 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\": container with ID starting with 6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a not found: ID does not exist" containerID="6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.034092 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a"} err="failed to get container status \"6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\": rpc error: code = NotFound desc = could not find container \"6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\": container with ID starting with 6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a not found: ID does not exist"
Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.416923 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.913111 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s"
Feb 18 14:03:31 crc kubenswrapper[4739]: E0218 14:03:31.651204 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 14:03:31 crc kubenswrapper[4739]: I0218 14:03:31.652071 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 14:03:31 crc kubenswrapper[4739]: W0218 14:03:31.682636 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ae747044135e538e48edd8de70571ba679e3a31114f59c0d7ac55f71bf462bed WatchSource:0}: Error finding container ae747044135e538e48edd8de70571ba679e3a31114f59c0d7ac55f71bf462bed: Status 404 returned error can't find the container with id ae747044135e538e48edd8de70571ba679e3a31114f59c0d7ac55f71bf462bed
Feb 18 14:03:31 crc kubenswrapper[4739]: E0218 14:03:31.686231 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18955c352057568e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 14:03:31.68577499 +0000 UTC m=+244.181495912,LastTimestamp:2026-02-18 14:03:31.68577499 +0000 UTC m=+244.181495912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 14:03:31 crc kubenswrapper[4739]: I0218 14:03:31.938757 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ae747044135e538e48edd8de70571ba679e3a31114f59c0d7ac55f71bf462bed"}
Feb 18 14:03:32 crc kubenswrapper[4739]: I0218 14:03:32.944752 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf"}
Feb 18 14:03:32 crc kubenswrapper[4739]: E0218 14:03:32.945323 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 14:03:32 crc kubenswrapper[4739]: I0218 14:03:32.945536 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:33 crc kubenswrapper[4739]: E0218 14:03:33.951150 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 14:03:34 crc kubenswrapper[4739]: E0218 14:03:34.113647 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="6.4s"
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.409421 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.410473 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.434556 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0"
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.434872 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0"
Feb 18 14:03:37 crc kubenswrapper[4739]: E0218 14:03:37.435471 4739 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.436095 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 14:03:37 crc kubenswrapper[4739]: E0218 14:03:37.484200 4739 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" volumeName="registry-storage"
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.972796 4739 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1395b979aac8e3d361decbe0ed7edf0aa760b49b8dee8acf52ecff93a1f3beb3" exitCode=0
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.972867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1395b979aac8e3d361decbe0ed7edf0aa760b49b8dee8acf52ecff93a1f3beb3"}
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.972920 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dcb4253475aa835dd8e8c53b9cb2c47800a6a7067c1d42e34f079c6cff7a10e2"}
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.973206 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0"
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.973221 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0"
Feb 18 14:03:37 crc kubenswrapper[4739]: E0218 14:03:37.973612 4739 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.973741 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused"
Feb 18 14:03:38 crc kubenswrapper[4739]: I0218 14:03:38.985467 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"62a20bd569adc675b5afe07886789f29380b0a42724bcb48190d65aec0c20952"}
Feb 18 14:03:38 crc kubenswrapper[4739]: I0218 14:03:38.986048 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6027817e8a32cf8e088506b9b11537d1b4801406d7ab6b01d19a01b37a69bac6"}
Feb 18 14:03:38 crc kubenswrapper[4739]: I0218 14:03:38.986061 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9d4aa79db9145f305e0bc340074d31c99dbd9f3e5d3aad01ea2a4455bd4cd201"}
Feb 18 14:03:38 crc kubenswrapper[4739]: I0218 14:03:38.986073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5e181c6dded8617100a39708c785c0996b5ab79691882170d631958b2cca9c9e"}
Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.992901 4739 log.go:25] "Finished parsing
log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.992949 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366" exitCode=1 Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.993002 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366"} Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.993427 4739 scope.go:117] "RemoveContainer" containerID="158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366" Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.998728 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d6095c05b40e14712a4f26388402ba5fb295b0972a796470a65ef0491aa781a7"} Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.998992 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.999092 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.999127 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:41 crc kubenswrapper[4739]: I0218 14:03:41.006960 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 14:03:41 crc kubenswrapper[4739]: I0218 14:03:41.007348 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"610a047b229be1341e5743f79181f9b3692358957501791b9cc4b591a8f75fdd"} Feb 18 14:03:41 crc kubenswrapper[4739]: I0218 14:03:41.648664 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:03:41 crc kubenswrapper[4739]: I0218 14:03:41.652584 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:03:42 crc kubenswrapper[4739]: I0218 14:03:42.012733 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:03:42 crc kubenswrapper[4739]: I0218 14:03:42.436682 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:42 crc kubenswrapper[4739]: I0218 14:03:42.436723 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:42 crc kubenswrapper[4739]: I0218 14:03:42.444709 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:45 crc kubenswrapper[4739]: I0218 14:03:45.005635 4739 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:45 crc kubenswrapper[4739]: I0218 14:03:45.027695 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:45 crc kubenswrapper[4739]: I0218 14:03:45.027722 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:45 crc kubenswrapper[4739]: I0218 14:03:45.032563 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:45 crc kubenswrapper[4739]: I0218 14:03:45.035950 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0baef8ef-0291-449b-b6a9-b7e8c8eae0ae" Feb 18 14:03:46 crc kubenswrapper[4739]: I0218 14:03:46.032705 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:46 crc kubenswrapper[4739]: I0218 14:03:46.032733 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:48 crc kubenswrapper[4739]: I0218 14:03:48.432794 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0baef8ef-0291-449b-b6a9-b7e8c8eae0ae" Feb 18 14:03:51 crc kubenswrapper[4739]: I0218 14:03:51.343871 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 14:03:51 crc kubenswrapper[4739]: I0218 14:03:51.505317 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 14:03:51 crc kubenswrapper[4739]: I0218 14:03:51.746642 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 
14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.084979 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.267271 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.320112 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.355966 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.478109 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.777185 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.838385 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.968281 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.175465 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.298387 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.344179 4739 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.570729 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.736229 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.779931 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.802288 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.842555 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.968926 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.136553 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.253216 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.349552 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.401161 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.603975 4739 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.813546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.926299 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 14:03:55 crc kubenswrapper[4739]: I0218 14:03:55.267279 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 14:03:55 crc kubenswrapper[4739]: I0218 14:03:55.551310 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 14:03:55 crc kubenswrapper[4739]: I0218 14:03:55.944689 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 14:03:55 crc kubenswrapper[4739]: I0218 14:03:55.979356 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 14:03:56 crc kubenswrapper[4739]: I0218 14:03:56.663180 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 14:03:56 crc kubenswrapper[4739]: I0218 14:03:56.707974 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 14:03:56 crc kubenswrapper[4739]: I0218 14:03:56.770989 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 14:03:56 crc kubenswrapper[4739]: I0218 14:03:56.897177 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 14:03:56 crc kubenswrapper[4739]: I0218 14:03:56.943318 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.006106 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.020179 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.193611 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.288701 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.728736 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.912569 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 14:03:58 crc kubenswrapper[4739]: I0218 14:03:58.401610 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 14:03:58 crc kubenswrapper[4739]: I0218 14:03:58.542168 4739 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 14:03:58 crc kubenswrapper[4739]: I0218 14:03:58.848185 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 14:03:58 crc 
kubenswrapper[4739]: I0218 14:03:58.913190 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 14:03:58 crc kubenswrapper[4739]: I0218 14:03:58.956860 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.000801 4739 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.008571 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.008647 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.015599 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.041482 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.041460691 podStartE2EDuration="14.041460691s" podCreationTimestamp="2026-02-18 14:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:03:59.037754647 +0000 UTC m=+271.533475639" watchObservedRunningTime="2026-02-18 14:03:59.041460691 +0000 UTC m=+271.537181623" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.083360 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.166078 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 14:03:59 crc kubenswrapper[4739]: 
I0218 14:03:59.429027 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.622584 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.676186 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.805120 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.871026 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.871884 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.936374 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.980638 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.328399 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.439367 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.557651 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.726511 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.843116 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.993751 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.064878 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.097322 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.219369 4739 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.231972 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.303701 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.327198 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.367664 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.376220 4739 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.617774 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.641032 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.685139 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.738910 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.753081 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.755952 4739 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.778214 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.813282 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.831314 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.889956 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 
14:04:01.897787 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.904799 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.926543 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.950589 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.969916 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.977494 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.099985 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.515370 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.524860 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.593433 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.661855 4739 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.662617 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.682875 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.719810 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.722623 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.741985 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.790547 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.807334 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.887496 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.905062 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.945128 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.074715 
4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.104288 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.142394 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.201223 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.220842 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.235815 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.253411 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.341425 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.376179 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.675784 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.730421 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.772616 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.794034 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.823083 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.847533 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.860024 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.882621 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.004568 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.046437 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.050890 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.111204 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.136852 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.176545 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.448918 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.516973 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.703565 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.705191 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.705846 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.853385 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.877225 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.901917 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.922955 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.975138 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.118669 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.181992 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.271622 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.378714 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.425668 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.483005 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.506733 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.517419 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.531225 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.549132 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.573123 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.690132 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.737562 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.743785 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.864495 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.871429 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.072227 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.174643 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.211432 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.220537 4739 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.223809 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.323630 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.583493 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.615704 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.717962 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.773241 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.826669 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.862631 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.017536 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.185592 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.189680 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.242551 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.248310 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.314576 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.325253 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.376438 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.541992 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.570690 4739 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.570897 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf" gracePeriod=5
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.583679 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.641607 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.663789 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.726244 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.745075 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.750347 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.870196 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.873979 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.892033 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.930915 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.978558 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.987594 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.987847 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.148686 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.152664 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.198161 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.225871 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.303156 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.331339 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.404883 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.434369 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.462460 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.484761 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.516730 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.584892 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.649762 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.705090 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.814549 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.827064 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.892549 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.894376 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.906923 4739 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.972196 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.238496 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.278125 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.286702 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.287153 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.357059 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.527112 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.701559 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.760076 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.767967 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.772087 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.783288 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.935812 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.938367 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.961647 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.060039 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.389329 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.411796 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.549428 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.570851 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.742158 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.765731 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.875843 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.914754 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.055547 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.145223 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.212613 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.235795 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.314923 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.519683 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.750840 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.754520 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.825482 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.975403 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.032006 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.044493 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.097613 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.144107 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.336399 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.699649 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.919256 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.951379 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.068357 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.148469 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.148549 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.153776 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.212832 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.212895 4739 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf" exitCode=137
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.212944 4739 scope.go:117] "RemoveContainer" containerID="0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.212988 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.235900 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.244160 4739 scope.go:117] "RemoveContainer" containerID="0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf"
Feb 18 14:04:13 crc kubenswrapper[4739]: E0218 14:04:13.244948 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf\": container with ID starting with 0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf not found: ID does not exist" containerID="0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.244996 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf"} err="failed to get container status \"0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf\": rpc error: code = NotFound desc = could not find container \"0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf\": container with ID starting with 0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf not found: ID does not exist"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257253 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257428 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257549 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257553 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257612 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257631 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257655 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257679 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257787 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.258083 4739 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.258114 4739 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.258139 4739 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.258165 4739 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.267678 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.268172 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.359753 4739 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.644991 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 18 14:04:14 crc kubenswrapper[4739]: I0218 14:04:14.417437 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 18 14:04:15 crc kubenswrapper[4739]: I0218 14:04:15.325648 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.163548 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"]
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.164526 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2ch5b" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="registry-server" containerID="cri-o://44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5" gracePeriod=30
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.170589 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47vjm"]
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.170953 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-47vjm" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="registry-server" containerID="cri-o://2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa" gracePeriod=30
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.192223 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"]
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.192592 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerName="marketplace-operator" containerID="cri-o://de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1" gracePeriod=30
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.201977 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"]
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.202272 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wznkg" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="registry-server" containerID="cri-o://1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8" gracePeriod=30
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.208677 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"]
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.209099 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fm56z" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="registry-server" containerID="cri-o://91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8" gracePeriod=30
Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.230191 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28vcn"]
Feb 18 14:04:17 crc kubenswrapper[4739]: E0218 14:04:17.230681
4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" containerName="installer" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.230778 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" containerName="installer" Feb 18 14:04:17 crc kubenswrapper[4739]: E0218 14:04:17.230896 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.230974 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.231174 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" containerName="installer" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.231277 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.231816 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.243219 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28vcn"] Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.313531 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.313630 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pjnf\" (UniqueName: \"kubernetes.io/projected/0dc6acff-649a-4e95-ba42-ad79dae4a787-kube-api-access-8pjnf\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.313679 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.415671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28vcn\" (UID: 
\"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.415739 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.415787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pjnf\" (UniqueName: \"kubernetes.io/projected/0dc6acff-649a-4e95-ba42-ad79dae4a787-kube-api-access-8pjnf\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.416946 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.420761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.431254 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pjnf\" 
(UniqueName: \"kubernetes.io/projected/0dc6acff-649a-4e95-ba42-ad79dae4a787-kube-api-access-8pjnf\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.606136 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.610379 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.614297 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.618563 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.623295 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.627664 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723028 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics\") pod \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwf78\" (UniqueName: \"kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78\") pod \"a7549289-fee3-4211-b340-731ff70593d1\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723135 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content\") pod \"a7549289-fee3-4211-b340-731ff70593d1\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723171 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78txq\" (UniqueName: \"kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq\") pod \"692fafe2-8be1-4359-8a74-f8916c8f6d55\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content\") pod \"692fafe2-8be1-4359-8a74-f8916c8f6d55\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723225 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2c4j\" (UniqueName: \"kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j\") pod \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723250 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4gr\" (UniqueName: \"kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr\") pod \"a44b0172-9ef1-4181-8380-bfe703bdc50d\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723320 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content\") pod \"6955631f-9981-47a5-8ecb-8756df4e0256\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726152 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities\") pod \"6955631f-9981-47a5-8ecb-8756df4e0256\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726197 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbcms\" (UniqueName: \"kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms\") pod \"6955631f-9981-47a5-8ecb-8756df4e0256\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726288 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities\") pod 
\"a44b0172-9ef1-4181-8380-bfe703bdc50d\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726572 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca\") pod \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726637 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities\") pod \"a7549289-fee3-4211-b340-731ff70593d1\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726678 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities\") pod \"692fafe2-8be1-4359-8a74-f8916c8f6d55\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726695 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content\") pod \"a44b0172-9ef1-4181-8380-bfe703bdc50d\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727060 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities" (OuterVolumeSpecName: "utilities") pod "6955631f-9981-47a5-8ecb-8756df4e0256" (UID: "6955631f-9981-47a5-8ecb-8756df4e0256"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727148 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727329 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities" (OuterVolumeSpecName: "utilities") pod "a44b0172-9ef1-4181-8380-bfe703bdc50d" (UID: "a44b0172-9ef1-4181-8380-bfe703bdc50d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727397 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities" (OuterVolumeSpecName: "utilities") pod "a7549289-fee3-4211-b340-731ff70593d1" (UID: "a7549289-fee3-4211-b340-731ff70593d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727547 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c43a59b1-306c-4a0e-9f9f-fad2e9082d55" (UID: "c43a59b1-306c-4a0e-9f9f-fad2e9082d55"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727569 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c43a59b1-306c-4a0e-9f9f-fad2e9082d55" (UID: "c43a59b1-306c-4a0e-9f9f-fad2e9082d55"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727583 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities" (OuterVolumeSpecName: "utilities") pod "692fafe2-8be1-4359-8a74-f8916c8f6d55" (UID: "692fafe2-8be1-4359-8a74-f8916c8f6d55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.728013 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq" (OuterVolumeSpecName: "kube-api-access-78txq") pod "692fafe2-8be1-4359-8a74-f8916c8f6d55" (UID: "692fafe2-8be1-4359-8a74-f8916c8f6d55"). InnerVolumeSpecName "kube-api-access-78txq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.729176 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78" (OuterVolumeSpecName: "kube-api-access-hwf78") pod "a7549289-fee3-4211-b340-731ff70593d1" (UID: "a7549289-fee3-4211-b340-731ff70593d1"). InnerVolumeSpecName "kube-api-access-hwf78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.729249 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr" (OuterVolumeSpecName: "kube-api-access-gg4gr") pod "a44b0172-9ef1-4181-8380-bfe703bdc50d" (UID: "a44b0172-9ef1-4181-8380-bfe703bdc50d"). InnerVolumeSpecName "kube-api-access-gg4gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.730399 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms" (OuterVolumeSpecName: "kube-api-access-nbcms") pod "6955631f-9981-47a5-8ecb-8756df4e0256" (UID: "6955631f-9981-47a5-8ecb-8756df4e0256"). InnerVolumeSpecName "kube-api-access-nbcms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.738320 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j" (OuterVolumeSpecName: "kube-api-access-w2c4j") pod "c43a59b1-306c-4a0e-9f9f-fad2e9082d55" (UID: "c43a59b1-306c-4a0e-9f9f-fad2e9082d55"). InnerVolumeSpecName "kube-api-access-w2c4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.772403 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6955631f-9981-47a5-8ecb-8756df4e0256" (UID: "6955631f-9981-47a5-8ecb-8756df4e0256"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.796847 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a44b0172-9ef1-4181-8380-bfe703bdc50d" (UID: "a44b0172-9ef1-4181-8380-bfe703bdc50d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.803074 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "692fafe2-8be1-4359-8a74-f8916c8f6d55" (UID: "692fafe2-8be1-4359-8a74-f8916c8f6d55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828513 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828551 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828562 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828570 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content\") on node \"crc\" 
DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828578 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828588 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwf78\" (UniqueName: \"kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828599 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78txq\" (UniqueName: \"kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828607 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828615 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2c4j\" (UniqueName: \"kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828623 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4gr\" (UniqueName: \"kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828631 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc 
kubenswrapper[4739]: I0218 14:04:17.828641 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbcms\" (UniqueName: \"kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828649 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.894713 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7549289-fee3-4211-b340-731ff70593d1" (UID: "a7549289-fee3-4211-b340-731ff70593d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.930092 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.026976 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28vcn"] Feb 18 14:04:18 crc kubenswrapper[4739]: W0218 14:04:18.034268 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dc6acff_649a_4e95_ba42_ad79dae4a787.slice/crio-e9eda71e71701aa693e5fd614f905a971422ca7ae5ce554aba886c3e4c9a9f28 WatchSource:0}: Error finding container e9eda71e71701aa693e5fd614f905a971422ca7ae5ce554aba886c3e4c9a9f28: Status 404 returned error can't find the container with id e9eda71e71701aa693e5fd614f905a971422ca7ae5ce554aba886c3e4c9a9f28 Feb 18 14:04:18 crc kubenswrapper[4739]: 
I0218 14:04:18.247318 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7549289-fee3-4211-b340-731ff70593d1" containerID="91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8" exitCode=0 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.247462 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerDied","Data":"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.248641 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerDied","Data":"ec2d2f157f528c4b55bc8096e827bd5672ec6bdfb957669781807b88427d0279"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.248675 4739 scope.go:117] "RemoveContainer" containerID="91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.247557 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.254040 4739 generic.go:334] "Generic (PLEG): container finished" podID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerID="44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5" exitCode=0 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.254128 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerDied","Data":"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.254153 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.254159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerDied","Data":"e5127c0ff7f429af7d0aca6c5c08ea2c05b6bea576e6c38224ce6837bef827fc"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.256247 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" event={"ID":"0dc6acff-649a-4e95-ba42-ad79dae4a787","Type":"ContainerStarted","Data":"714b0e311cf9c7f19440fbee07a029c180a9456bf6cca7b41a364e0fdd30c2ef"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.256284 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" event={"ID":"0dc6acff-649a-4e95-ba42-ad79dae4a787","Type":"ContainerStarted","Data":"e9eda71e71701aa693e5fd614f905a971422ca7ae5ce554aba886c3e4c9a9f28"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.256794 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.264661 4739 scope.go:117] "RemoveContainer" containerID="d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.266812 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.266929 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" 
podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.272226 4739 generic.go:334] "Generic (PLEG): container finished" podID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerID="de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1" exitCode=0 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.272334 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.272286 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" event={"ID":"c43a59b1-306c-4a0e-9f9f-fad2e9082d55","Type":"ContainerDied","Data":"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.272500 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" event={"ID":"c43a59b1-306c-4a0e-9f9f-fad2e9082d55","Type":"ContainerDied","Data":"6ae935e4756c3ac9dd9d42b9a107606b44a96ac470faeaa29302b35c3bb1c8df"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.275650 4739 generic.go:334] "Generic (PLEG): container finished" podID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerID="2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa" exitCode=0 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.275776 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.275784 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerDied","Data":"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.275829 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerDied","Data":"59dbe1e3611ef825eb60e8c102d83aabfcf6d0ed72189d4427096a9698a93bb3"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.293234 4739 generic.go:334] "Generic (PLEG): container finished" podID="6955631f-9981-47a5-8ecb-8756df4e0256" containerID="1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8" exitCode=0 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.293362 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerDied","Data":"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.293391 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.293403 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerDied","Data":"10d8a724d59bd6a5d14617a528e748b2601030ae0dc43e290bc4b95d4dedba40"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.304929 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podStartSLOduration=1.304903953 podStartE2EDuration="1.304903953s" podCreationTimestamp="2026-02-18 14:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:04:18.291352651 +0000 UTC m=+290.787073583" watchObservedRunningTime="2026-02-18 14:04:18.304903953 +0000 UTC m=+290.800624895" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.309718 4739 scope.go:117] "RemoveContainer" containerID="9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.348807 4739 scope.go:117] "RemoveContainer" containerID="91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.349511 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8\": container with ID starting with 91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8 not found: ID does not exist" containerID="91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.349570 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8"} err="failed to get container status \"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8\": rpc error: code = NotFound desc = could not find container \"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8\": container with ID starting with 91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.349602 4739 scope.go:117] "RemoveContainer" containerID="d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.350397 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c\": container with ID starting with d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c not found: ID does not exist" containerID="d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.350462 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c"} err="failed to get container status \"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c\": rpc error: code = NotFound desc = could not find container \"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c\": container with ID starting with d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.350493 4739 scope.go:117] "RemoveContainer" containerID="9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.351234 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242\": container with ID starting with 9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242 not found: ID does not exist" containerID="9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.351262 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242"} err="failed to get container status \"9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242\": rpc error: code = NotFound desc = could not find container \"9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242\": container with ID starting with 9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.351287 4739 scope.go:117] "RemoveContainer" containerID="44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.353206 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.357830 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.373013 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.374203 4739 scope.go:117] "RemoveContainer" containerID="e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.380609 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 
14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.387393 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.390699 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.407642 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.410290 4739 scope.go:117] "RemoveContainer" containerID="4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.418971 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" path="/var/lib/kubelet/pods/692fafe2-8be1-4359-8a74-f8916c8f6d55/volumes" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.420239 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" path="/var/lib/kubelet/pods/a44b0172-9ef1-4181-8380-bfe703bdc50d/volumes" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.421013 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7549289-fee3-4211-b340-731ff70593d1" path="/var/lib/kubelet/pods/a7549289-fee3-4211-b340-731ff70593d1/volumes" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.422222 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.422252 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.424688 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"] Feb 18 14:04:18 crc 
kubenswrapper[4739]: I0218 14:04:18.426105 4739 scope.go:117] "RemoveContainer" containerID="44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.426518 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5\": container with ID starting with 44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5 not found: ID does not exist" containerID="44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.426548 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5"} err="failed to get container status \"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5\": rpc error: code = NotFound desc = could not find container \"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5\": container with ID starting with 44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.426566 4739 scope.go:117] "RemoveContainer" containerID="e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.426809 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662\": container with ID starting with e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662 not found: ID does not exist" containerID="e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.426846 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662"} err="failed to get container status \"e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662\": rpc error: code = NotFound desc = could not find container \"e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662\": container with ID starting with e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.426877 4739 scope.go:117] "RemoveContainer" containerID="4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.427130 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608\": container with ID starting with 4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608 not found: ID does not exist" containerID="4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.427162 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608"} err="failed to get container status \"4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608\": rpc error: code = NotFound desc = could not find container \"4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608\": container with ID starting with 4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.427179 4739 scope.go:117] "RemoveContainer" containerID="de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.445159 4739 scope.go:117] "RemoveContainer" 
containerID="de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.445620 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1\": container with ID starting with de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1 not found: ID does not exist" containerID="de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.445648 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1"} err="failed to get container status \"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1\": rpc error: code = NotFound desc = could not find container \"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1\": container with ID starting with de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.445672 4739 scope.go:117] "RemoveContainer" containerID="2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.460419 4739 scope.go:117] "RemoveContainer" containerID="e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.476253 4739 scope.go:117] "RemoveContainer" containerID="551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.498335 4739 scope.go:117] "RemoveContainer" containerID="2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.498903 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa\": container with ID starting with 2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa not found: ID does not exist" containerID="2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.498946 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa"} err="failed to get container status \"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa\": rpc error: code = NotFound desc = could not find container \"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa\": container with ID starting with 2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.498968 4739 scope.go:117] "RemoveContainer" containerID="e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.499600 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10\": container with ID starting with e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10 not found: ID does not exist" containerID="e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.499623 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10"} err="failed to get container status \"e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10\": rpc error: code = NotFound desc = could not find container 
\"e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10\": container with ID starting with e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.499637 4739 scope.go:117] "RemoveContainer" containerID="551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.500626 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e\": container with ID starting with 551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e not found: ID does not exist" containerID="551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.500677 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e"} err="failed to get container status \"551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e\": rpc error: code = NotFound desc = could not find container \"551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e\": container with ID starting with 551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.500709 4739 scope.go:117] "RemoveContainer" containerID="1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.512803 4739 scope.go:117] "RemoveContainer" containerID="8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.530998 4739 scope.go:117] "RemoveContainer" containerID="9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64" Feb 18 14:04:18 crc 
kubenswrapper[4739]: I0218 14:04:18.547657 4739 scope.go:117] "RemoveContainer" containerID="1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.548063 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8\": container with ID starting with 1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8 not found: ID does not exist" containerID="1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.548094 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8"} err="failed to get container status \"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8\": rpc error: code = NotFound desc = could not find container \"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8\": container with ID starting with 1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.548117 4739 scope.go:117] "RemoveContainer" containerID="8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.548527 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c\": container with ID starting with 8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c not found: ID does not exist" containerID="8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.548560 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c"} err="failed to get container status \"8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c\": rpc error: code = NotFound desc = could not find container \"8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c\": container with ID starting with 8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.548578 4739 scope.go:117] "RemoveContainer" containerID="9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.548817 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64\": container with ID starting with 9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64 not found: ID does not exist" containerID="9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.548840 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64"} err="failed to get container status \"9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64\": rpc error: code = NotFound desc = could not find container \"9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64\": container with ID starting with 9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64 not found: ID does not exist" Feb 18 14:04:19 crc kubenswrapper[4739]: I0218 14:04:19.312427 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:20 crc kubenswrapper[4739]: I0218 14:04:20.416155 4739 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" path="/var/lib/kubelet/pods/6955631f-9981-47a5-8ecb-8756df4e0256/volumes" Feb 18 14:04:20 crc kubenswrapper[4739]: I0218 14:04:20.417794 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" path="/var/lib/kubelet/pods/c43a59b1-306c-4a0e-9f9f-fad2e9082d55/volumes" Feb 18 14:04:28 crc kubenswrapper[4739]: I0218 14:04:28.085033 4739 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.636634 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9"] Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637371 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637386 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637402 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637410 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637422 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637430 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="extract-utilities" Feb 18 14:04:48 crc 
kubenswrapper[4739]: E0218 14:04:48.637530 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637545 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637555 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerName="marketplace-operator" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637564 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerName="marketplace-operator" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637574 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637581 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637593 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637602 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637612 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637620 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="registry-server" Feb 18 14:04:48 
crc kubenswrapper[4739]: E0218 14:04:48.637630 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637637 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637648 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637655 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637667 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637674 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637685 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637692 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637705 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637712 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="extract-content" Feb 18 14:04:48 crc 
kubenswrapper[4739]: I0218 14:04:48.637815 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637827 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637848 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637859 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerName="marketplace-operator" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637870 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.638263 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.644594 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.650394 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.653867 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.654853 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.659085 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.676562 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9"] Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.707800 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/60fdcb8b-f362-4d6b-981a-aad2da285f70-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.708035 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/60fdcb8b-f362-4d6b-981a-aad2da285f70-telemetry-config\") pod 
\"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.708297 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9jk\" (UniqueName: \"kubernetes.io/projected/60fdcb8b-f362-4d6b-981a-aad2da285f70-kube-api-access-cs9jk\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.810376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9jk\" (UniqueName: \"kubernetes.io/projected/60fdcb8b-f362-4d6b-981a-aad2da285f70-kube-api-access-cs9jk\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.810559 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/60fdcb8b-f362-4d6b-981a-aad2da285f70-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.810608 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/60fdcb8b-f362-4d6b-981a-aad2da285f70-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc 
kubenswrapper[4739]: I0218 14:04:48.812365 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/60fdcb8b-f362-4d6b-981a-aad2da285f70-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.821080 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/60fdcb8b-f362-4d6b-981a-aad2da285f70-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.842108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9jk\" (UniqueName: \"kubernetes.io/projected/60fdcb8b-f362-4d6b-981a-aad2da285f70-kube-api-access-cs9jk\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.969956 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:49 crc kubenswrapper[4739]: I0218 14:04:49.439065 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9"] Feb 18 14:04:50 crc kubenswrapper[4739]: I0218 14:04:50.464156 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" event={"ID":"60fdcb8b-f362-4d6b-981a-aad2da285f70","Type":"ContainerStarted","Data":"ba20ef6e135638d511d72a2468e17e0b85632cac4307e195fb0a85a3620f776b"} Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.470836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" event={"ID":"60fdcb8b-f362-4d6b-981a-aad2da285f70","Type":"ContainerStarted","Data":"2d36deca78ce76e5bc7e10d8272e9998c1dd07ad4a61815e895f09527aca3787"} Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.492847 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" podStartSLOduration=1.6443391269999998 podStartE2EDuration="3.492817686s" podCreationTimestamp="2026-02-18 14:04:48 +0000 UTC" firstStartedPulling="2026-02-18 14:04:49.463822189 +0000 UTC m=+321.959543121" lastFinishedPulling="2026-02-18 14:04:51.312300758 +0000 UTC m=+323.808021680" observedRunningTime="2026-02-18 14:04:51.48821351 +0000 UTC m=+323.983934442" watchObservedRunningTime="2026-02-18 14:04:51.492817686 +0000 UTC m=+323.988538628" Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.941477 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg"] Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.943287 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.946128 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.947740 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.948275 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg"] Feb 18 14:04:52 crc kubenswrapper[4739]: I0218 14:04:52.049642 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:52 crc kubenswrapper[4739]: E0218 14:04:52.049801 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:52 crc kubenswrapper[4739]: E0218 14:04:52.049923 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. 
No retries permitted until 2026-02-18 14:04:52.549895404 +0000 UTC m=+325.045616366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:52 crc kubenswrapper[4739]: I0218 14:04:52.555479 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:52 crc kubenswrapper[4739]: E0218 14:04:52.555712 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:52 crc kubenswrapper[4739]: E0218 14:04:52.555808 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:04:53.555780159 +0000 UTC m=+326.051501151 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:53 crc kubenswrapper[4739]: I0218 14:04:53.566745 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:53 crc kubenswrapper[4739]: E0218 14:04:53.566935 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:53 crc kubenswrapper[4739]: E0218 14:04:53.567750 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:04:55.567723468 +0000 UTC m=+328.063444430 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:55 crc kubenswrapper[4739]: I0218 14:04:55.592073 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:55 crc kubenswrapper[4739]: E0218 14:04:55.592229 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:55 crc kubenswrapper[4739]: E0218 14:04:55.592483 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:04:59.592467473 +0000 UTC m=+332.088188395 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:59 crc kubenswrapper[4739]: I0218 14:04:59.372818 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:04:59 crc kubenswrapper[4739]: I0218 14:04:59.372899 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:04:59 crc kubenswrapper[4739]: I0218 14:04:59.642679 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:59 crc kubenswrapper[4739]: E0218 14:04:59.642926 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:59 crc kubenswrapper[4739]: E0218 14:04:59.643036 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c 
nodeName:}" failed. No retries permitted until 2026-02-18 14:05:07.643008728 +0000 UTC m=+340.138729680 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:05:07 crc kubenswrapper[4739]: I0218 14:05:07.698184 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:07 crc kubenswrapper[4739]: E0218 14:05:07.698357 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:05:07 crc kubenswrapper[4739]: E0218 14:05:07.698900 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:05:23.698882076 +0000 UTC m=+356.194602998 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.148046 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"] Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.148592 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerName="controller-manager" containerID="cri-o://2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418" gracePeriod=30 Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.269255 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"] Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.269782 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerName="route-controller-manager" containerID="cri-o://8fe561d69997a42f05c72d8193b431b41c69814dd140f03816516811cdf03267" gracePeriod=30 Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.487681 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.573765 4739 generic.go:334] "Generic (PLEG): container finished" podID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerID="2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418" exitCode=0 Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.573844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" event={"ID":"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713","Type":"ContainerDied","Data":"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418"} Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.573870 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" event={"ID":"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713","Type":"ContainerDied","Data":"1542f2a32767ea611a0dd0201115ccf7f36e2a7c9f28dba16c4caf8e215a8b80"} Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.573887 4739 scope.go:117] "RemoveContainer" containerID="2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.573989 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.575963 4739 generic.go:334] "Generic (PLEG): container finished" podID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerID="8fe561d69997a42f05c72d8193b431b41c69814dd140f03816516811cdf03267" exitCode=0 Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.576003 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" event={"ID":"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39","Type":"ContainerDied","Data":"8fe561d69997a42f05c72d8193b431b41c69814dd140f03816516811cdf03267"} Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.593234 4739 scope.go:117] "RemoveContainer" containerID="2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418" Feb 18 14:05:10 crc kubenswrapper[4739]: E0218 14:05:10.595924 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418\": container with ID starting with 2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418 not found: ID does not exist" containerID="2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.596942 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418"} err="failed to get container status \"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418\": rpc error: code = NotFound desc = could not find container \"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418\": container with ID starting with 2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418 not found: ID does not exist" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 
14:05:10.622305 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.638426 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert\") pod \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.638535 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config\") pod \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.638584 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drssc\" (UniqueName: \"kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc\") pod \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.638639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles\") pod \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.638659 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca\") pod \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.639670 
4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca" (OuterVolumeSpecName: "client-ca") pod "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" (UID: "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.639789 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config" (OuterVolumeSpecName: "config") pod "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" (UID: "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.640275 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" (UID: "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.646123 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" (UID: "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.646254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc" (OuterVolumeSpecName: "kube-api-access-drssc") pod "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" (UID: "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713"). InnerVolumeSpecName "kube-api-access-drssc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.740116 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config\") pod \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.740580 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bdfq\" (UniqueName: \"kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq\") pod \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.740831 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca\") pod \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.741087 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert\") pod \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.741104 4739 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config" (OuterVolumeSpecName: "config") pod "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" (UID: "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.741501 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca" (OuterVolumeSpecName: "client-ca") pod "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" (UID: "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.741935 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.742101 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.742236 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.742884 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.743039 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drssc\" (UniqueName: 
\"kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.743185 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.743371 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.743998 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq" (OuterVolumeSpecName: "kube-api-access-2bdfq") pod "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" (UID: "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39"). InnerVolumeSpecName "kube-api-access-2bdfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.744090 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" (UID: "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.846161 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bdfq\" (UniqueName: \"kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.846217 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.914618 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"] Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.919995 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.586406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" event={"ID":"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39","Type":"ContainerDied","Data":"fae6dc1b6a99284726a5c316e9b142133b64b76e06f03661a6baf4b3e9620752"} Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.586461 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.586511 4739 scope.go:117] "RemoveContainer" containerID="8fe561d69997a42f05c72d8193b431b41c69814dd140f03816516811cdf03267" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.620360 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.637040 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.674569 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:11 crc kubenswrapper[4739]: E0218 14:05:11.674811 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerName="controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.674826 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerName="controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: E0218 14:05:11.674837 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerName="route-controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.674848 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerName="route-controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.674960 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerName="route-controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.674970 4739 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerName="controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.675396 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.677381 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.677985 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.678116 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.678281 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.678410 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.681740 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.684491 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.685102 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.687431 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.689762 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.690030 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.691143 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.693324 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.693506 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.693650 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.694882 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.693846 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.756206 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fxp54\" (UniqueName: \"kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.756555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.756728 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.756841 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.756942 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " 
pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.757203 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24w9r\" (UniqueName: \"kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.757275 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.757320 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.757352 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.858440 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.859574 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.859718 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.859911 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24w9r\" (UniqueName: \"kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.859966 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " 
pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860017 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860110 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxp54\" (UniqueName: \"kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.861142 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.861769 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.862181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.865081 4739 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.866803 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.880778 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24w9r\" (UniqueName: \"kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.882396 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxp54\" (UniqueName: \"kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.010778 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.018796 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.205299 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"] Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.242628 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.417259 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" path="/var/lib/kubelet/pods/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713/volumes" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.417920 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" path="/var/lib/kubelet/pods/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39/volumes" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.593197 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" event={"ID":"386aca13-7178-47f2-bf26-bb78e5c5ff49","Type":"ContainerStarted","Data":"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b"} Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.593246 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" event={"ID":"386aca13-7178-47f2-bf26-bb78e5c5ff49","Type":"ContainerStarted","Data":"859379702eb5973733471369b3a7d9b5d3eb03bf0ee5ef2eb69a21d044e09a3e"} Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.593533 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.595059 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" event={"ID":"adb7e32d-b0a0-48cd-9bd0-03a390dcead5","Type":"ContainerStarted","Data":"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446"} Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.595122 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" event={"ID":"adb7e32d-b0a0-48cd-9bd0-03a390dcead5","Type":"ContainerStarted","Data":"b1b185d98c36c27c5d4462426e4a18d83db79ec0473ca9aef0bf6917797ee642"} Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.595429 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.599376 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.611797 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" podStartSLOduration=2.611780084 podStartE2EDuration="2.611780084s" podCreationTimestamp="2026-02-18 14:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:05:12.611069856 +0000 UTC m=+345.106790778" watchObservedRunningTime="2026-02-18 14:05:12.611780084 +0000 UTC m=+345.107501006" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.657316 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" podStartSLOduration=2.6572976539999997 podStartE2EDuration="2.657297654s" podCreationTimestamp="2026-02-18 14:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:05:12.655253353 +0000 UTC m=+345.150974275" watchObservedRunningTime="2026-02-18 14:05:12.657297654 +0000 UTC m=+345.153018576" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.830805 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:23 crc kubenswrapper[4739]: I0218 14:05:23.721251 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:23 crc kubenswrapper[4739]: E0218 14:05:23.721428 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:05:23 crc kubenswrapper[4739]: E0218 14:05:23.722046 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:05:55.722021831 +0000 UTC m=+388.217742753 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.068042 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.069048 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.071528 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.081555 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.139466 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzvjj\" (UniqueName: \"kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.139526 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.139556 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.241043 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzvjj\" (UniqueName: \"kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.241123 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.241159 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.241839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.241862 4739 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.264142 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4z7n"] Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.265313 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzvjj\" (UniqueName: \"kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.265351 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.269538 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.275069 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4z7n"] Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.342788 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-catalog-content\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.343176 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-utilities\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.343208 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4kjm\" (UniqueName: \"kubernetes.io/projected/0cc54472-7fa4-457e-a332-420ce4a7da93-kube-api-access-c4kjm\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.396835 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.444774 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-utilities\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.444822 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4kjm\" (UniqueName: \"kubernetes.io/projected/0cc54472-7fa4-457e-a332-420ce4a7da93-kube-api-access-c4kjm\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.444878 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-catalog-content\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " 
pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.445435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-utilities\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.445600 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-catalog-content\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.469128 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4kjm\" (UniqueName: \"kubernetes.io/projected/0cc54472-7fa4-457e-a332-420ce4a7da93-kube-api-access-c4kjm\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.603155 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.859425 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:05:25 crc kubenswrapper[4739]: W0218 14:05:25.865333 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb612bd_4974_4e9b_91d7_0240ce057aa5.slice/crio-81b46654edd19d1432b58f9bd2576a94f39cc05f5d205ae85216f27b952d6aca WatchSource:0}: Error finding container 81b46654edd19d1432b58f9bd2576a94f39cc05f5d205ae85216f27b952d6aca: Status 404 returned error can't find the container with id 81b46654edd19d1432b58f9bd2576a94f39cc05f5d205ae85216f27b952d6aca Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.038097 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4z7n"] Feb 18 14:05:26 crc kubenswrapper[4739]: W0218 14:05:26.055767 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cc54472_7fa4_457e_a332_420ce4a7da93.slice/crio-0d616990b305a6fbcf0beb47439116c703d5adf6230960553ea209ae19651d9c WatchSource:0}: Error finding container 0d616990b305a6fbcf0beb47439116c703d5adf6230960553ea209ae19651d9c: Status 404 returned error can't find the container with id 0d616990b305a6fbcf0beb47439116c703d5adf6230960553ea209ae19651d9c Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.695312 4739 generic.go:334] "Generic (PLEG): container finished" podID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerID="cd68ab8027f647103dec3361912c6740c7fe91057ba0556d4d221b3bd0864eff" exitCode=0 Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.695539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" 
event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerDied","Data":"cd68ab8027f647103dec3361912c6740c7fe91057ba0556d4d221b3bd0864eff"} Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.695730 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerStarted","Data":"81b46654edd19d1432b58f9bd2576a94f39cc05f5d205ae85216f27b952d6aca"} Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.700109 4739 generic.go:334] "Generic (PLEG): container finished" podID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerID="2d78331716a2f84a755f4a350cf5232ec80ebedd83e8ac65ef7e623049513e2d" exitCode=0 Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.700153 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4z7n" event={"ID":"0cc54472-7fa4-457e-a332-420ce4a7da93","Type":"ContainerDied","Data":"2d78331716a2f84a755f4a350cf5232ec80ebedd83e8ac65ef7e623049513e2d"} Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.700184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4z7n" event={"ID":"0cc54472-7fa4-457e-a332-420ce4a7da93","Type":"ContainerStarted","Data":"0d616990b305a6fbcf0beb47439116c703d5adf6230960553ea209ae19651d9c"} Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.474164 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v6sbz"] Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.475079 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: W0218 14:05:27.477157 4739 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Feb 18 14:05:27 crc kubenswrapper[4739]: E0218 14:05:27.477316 4739 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.517876 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6sbz"] Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.580751 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-catalog-content\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.581094 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9wgv\" (UniqueName: \"kubernetes.io/projected/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-kube-api-access-d9wgv\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " 
pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.581156 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-utilities\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.682051 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-catalog-content\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.682131 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9wgv\" (UniqueName: \"kubernetes.io/projected/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-kube-api-access-d9wgv\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.682535 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-utilities\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.682700 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-catalog-content\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " 
pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.684356 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-utilities\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.695053 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.696408 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.699090 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.704517 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.705629 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9wgv\" (UniqueName: \"kubernetes.io/projected/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-kube-api-access-d9wgv\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.708080 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerStarted","Data":"eb5f5e626edf6dc5aeeea1562bacf9b30a38b08f9a8a02a3adf3e93c88281a22"} Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.710333 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-p4z7n" event={"ID":"0cc54472-7fa4-457e-a332-420ce4a7da93","Type":"ContainerStarted","Data":"0ef00bb43e458bf2050b4f932e4a377fa199e86da31eba836458fbe900607947"} Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.783412 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.783492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.783538 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzff\" (UniqueName: \"kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.884536 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.884583 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.884644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzff\" (UniqueName: \"kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.885131 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.885307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.905289 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzff\" (UniqueName: \"kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.099894 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.317918 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.317958 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.537960 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.717278 4739 generic.go:334] "Generic (PLEG): container finished" podID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerID="eb5f5e626edf6dc5aeeea1562bacf9b30a38b08f9a8a02a3adf3e93c88281a22" exitCode=0 Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.717360 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerDied","Data":"eb5f5e626edf6dc5aeeea1562bacf9b30a38b08f9a8a02a3adf3e93c88281a22"} Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.719171 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerDied","Data":"18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f"} Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.719078 4739 generic.go:334] "Generic (PLEG): container finished" podID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerID="18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f" exitCode=0 Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.719322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" 
event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerStarted","Data":"9db4c60d6322480e701f551598fedffb94eb253b0f0fc2549d5772b70af9210c"} Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.722426 4739 generic.go:334] "Generic (PLEG): container finished" podID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerID="0ef00bb43e458bf2050b4f932e4a377fa199e86da31eba836458fbe900607947" exitCode=0 Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.722470 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4z7n" event={"ID":"0cc54472-7fa4-457e-a332-420ce4a7da93","Type":"ContainerDied","Data":"0ef00bb43e458bf2050b4f932e4a377fa199e86da31eba836458fbe900607947"} Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.722493 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4z7n" event={"ID":"0cc54472-7fa4-457e-a332-420ce4a7da93","Type":"ContainerStarted","Data":"34c0039c5c354e86e2b1d1d3fbf6d5fcc9f2e4f0b922df5cb3730e4347df63f4"} Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.769482 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6sbz"] Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.771557 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4z7n" podStartSLOduration=2.313102586 podStartE2EDuration="3.771533453s" podCreationTimestamp="2026-02-18 14:05:25 +0000 UTC" firstStartedPulling="2026-02-18 14:05:26.701839592 +0000 UTC m=+359.197560524" lastFinishedPulling="2026-02-18 14:05:28.160270469 +0000 UTC m=+360.655991391" observedRunningTime="2026-02-18 14:05:28.768002234 +0000 UTC m=+361.263723166" watchObservedRunningTime="2026-02-18 14:05:28.771533453 +0000 UTC m=+361.267254385" Feb 18 14:05:28 crc kubenswrapper[4739]: W0218 14:05:28.776581 4739 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0ff243b_1f5d_4ab1_af8c_38a98b870d27.slice/crio-8559ec93d437695a8a52075d9252ff09f0cddd6be5ff8eaeaa628e40537918d2 WatchSource:0}: Error finding container 8559ec93d437695a8a52075d9252ff09f0cddd6be5ff8eaeaa628e40537918d2: Status 404 returned error can't find the container with id 8559ec93d437695a8a52075d9252ff09f0cddd6be5ff8eaeaa628e40537918d2 Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.372731 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.373393 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.729030 4739 generic.go:334] "Generic (PLEG): container finished" podID="c0ff243b-1f5d-4ab1-af8c-38a98b870d27" containerID="9c52441e88eb1150b26b8bccb866ea4c8f6076109e0e9fe0290ac66f558571ef" exitCode=0 Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.729103 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6sbz" event={"ID":"c0ff243b-1f5d-4ab1-af8c-38a98b870d27","Type":"ContainerDied","Data":"9c52441e88eb1150b26b8bccb866ea4c8f6076109e0e9fe0290ac66f558571ef"} Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.729130 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6sbz" 
event={"ID":"c0ff243b-1f5d-4ab1-af8c-38a98b870d27","Type":"ContainerStarted","Data":"8559ec93d437695a8a52075d9252ff09f0cddd6be5ff8eaeaa628e40537918d2"} Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.734569 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerStarted","Data":"65422be5444c8a4ea68ae396ec7f1c722474a478587aebd1878eee8ec7e12e64"} Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.783968 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n5478" podStartSLOduration=2.35602148 podStartE2EDuration="4.783942535s" podCreationTimestamp="2026-02-18 14:05:25 +0000 UTC" firstStartedPulling="2026-02-18 14:05:26.699920744 +0000 UTC m=+359.195641686" lastFinishedPulling="2026-02-18 14:05:29.127841819 +0000 UTC m=+361.623562741" observedRunningTime="2026-02-18 14:05:29.775190506 +0000 UTC m=+362.270911428" watchObservedRunningTime="2026-02-18 14:05:29.783942535 +0000 UTC m=+362.279663457" Feb 18 14:05:30 crc kubenswrapper[4739]: I0218 14:05:30.163663 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:30 crc kubenswrapper[4739]: I0218 14:05:30.164248 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" podUID="386aca13-7178-47f2-bf26-bb78e5c5ff49" containerName="controller-manager" containerID="cri-o://a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b" gracePeriod=30 Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.732686 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.742747 4739 generic.go:334] "Generic (PLEG): container finished" podID="386aca13-7178-47f2-bf26-bb78e5c5ff49" containerID="a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b" exitCode=0 Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.742814 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.742849 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" event={"ID":"386aca13-7178-47f2-bf26-bb78e5c5ff49","Type":"ContainerDied","Data":"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b"} Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.742894 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" event={"ID":"386aca13-7178-47f2-bf26-bb78e5c5ff49","Type":"ContainerDied","Data":"859379702eb5973733471369b3a7d9b5d3eb03bf0ee5ef2eb69a21d044e09a3e"} Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.742914 4739 scope.go:117] "RemoveContainer" containerID="a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.751258 4739 generic.go:334] "Generic (PLEG): container finished" podID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerID="20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548" exitCode=0 Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.751305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerDied","Data":"20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548"} Feb 
18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.774526 4739 scope.go:117] "RemoveContainer" containerID="a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b" Feb 18 14:05:31 crc kubenswrapper[4739]: E0218 14:05:30.781653 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b\": container with ID starting with a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b not found: ID does not exist" containerID="a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.781734 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b"} err="failed to get container status \"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b\": rpc error: code = NotFound desc = could not find container \"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b\": container with ID starting with a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b not found: ID does not exist" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.821711 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca\") pod \"386aca13-7178-47f2-bf26-bb78e5c5ff49\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.821752 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert\") pod \"386aca13-7178-47f2-bf26-bb78e5c5ff49\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.821870 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles\") pod \"386aca13-7178-47f2-bf26-bb78e5c5ff49\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.821888 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config\") pod \"386aca13-7178-47f2-bf26-bb78e5c5ff49\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.821919 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxp54\" (UniqueName: \"kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54\") pod \"386aca13-7178-47f2-bf26-bb78e5c5ff49\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.823140 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "386aca13-7178-47f2-bf26-bb78e5c5ff49" (UID: "386aca13-7178-47f2-bf26-bb78e5c5ff49"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.823276 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config" (OuterVolumeSpecName: "config") pod "386aca13-7178-47f2-bf26-bb78e5c5ff49" (UID: "386aca13-7178-47f2-bf26-bb78e5c5ff49"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.823483 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca" (OuterVolumeSpecName: "client-ca") pod "386aca13-7178-47f2-bf26-bb78e5c5ff49" (UID: "386aca13-7178-47f2-bf26-bb78e5c5ff49"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.829589 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54" (OuterVolumeSpecName: "kube-api-access-fxp54") pod "386aca13-7178-47f2-bf26-bb78e5c5ff49" (UID: "386aca13-7178-47f2-bf26-bb78e5c5ff49"). InnerVolumeSpecName "kube-api-access-fxp54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.829835 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "386aca13-7178-47f2-bf26-bb78e5c5ff49" (UID: "386aca13-7178-47f2-bf26-bb78e5c5ff49"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.923971 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.924022 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.924049 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.924070 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxp54\" (UniqueName: \"kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.924088 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.139926 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.147600 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.699112 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"] Feb 18 14:05:31 crc kubenswrapper[4739]: E0218 
14:05:31.699644 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386aca13-7178-47f2-bf26-bb78e5c5ff49" containerName="controller-manager" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.699667 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="386aca13-7178-47f2-bf26-bb78e5c5ff49" containerName="controller-manager" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.699800 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="386aca13-7178-47f2-bf26-bb78e5c5ff49" containerName="controller-manager" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.700237 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.702143 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.702804 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.703299 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.703633 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.704021 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.704810 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.715340 4739 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.769069 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"] Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.789006 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerStarted","Data":"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689"} Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.792395 4739 generic.go:334] "Generic (PLEG): container finished" podID="c0ff243b-1f5d-4ab1-af8c-38a98b870d27" containerID="a91a155362822a8fd7463aee53a03bcce527fc96711ba76c83d138a4ccc3acb5" exitCode=0 Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.792454 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6sbz" event={"ID":"c0ff243b-1f5d-4ab1-af8c-38a98b870d27","Type":"ContainerDied","Data":"a91a155362822a8fd7463aee53a03bcce527fc96711ba76c83d138a4ccc3acb5"} Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.809517 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-94tzm" podStartSLOduration=2.381209286 podStartE2EDuration="4.809497s" podCreationTimestamp="2026-02-18 14:05:27 +0000 UTC" firstStartedPulling="2026-02-18 14:05:28.720416072 +0000 UTC m=+361.216136994" lastFinishedPulling="2026-02-18 14:05:31.148703776 +0000 UTC m=+363.644424708" observedRunningTime="2026-02-18 14:05:31.809110801 +0000 UTC m=+364.304831743" watchObservedRunningTime="2026-02-18 14:05:31.809497 +0000 UTC m=+364.305217942" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.835661 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-client-ca\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.835777 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-config\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.835828 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-proxy-ca-bundles\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.835861 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8p78\" (UniqueName: \"kubernetes.io/projected/0480fc06-58bc-47d0-9446-8eb7ecad6509-kube-api-access-s8p78\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.836029 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0480fc06-58bc-47d0-9446-8eb7ecad6509-serving-cert\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.937046 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0480fc06-58bc-47d0-9446-8eb7ecad6509-serving-cert\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.937085 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-client-ca\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.937129 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-config\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.937178 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-proxy-ca-bundles\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.937235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8p78\" (UniqueName: \"kubernetes.io/projected/0480fc06-58bc-47d0-9446-8eb7ecad6509-kube-api-access-s8p78\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.938297 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-client-ca\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.938364 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-proxy-ca-bundles\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.938393 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-config\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.940971 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0480fc06-58bc-47d0-9446-8eb7ecad6509-serving-cert\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.964205 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8p78\" (UniqueName: \"kubernetes.io/projected/0480fc06-58bc-47d0-9446-8eb7ecad6509-kube-api-access-s8p78\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.082235 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.277820 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"]
Feb 18 14:05:32 crc kubenswrapper[4739]: W0218 14:05:32.299982 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0480fc06_58bc_47d0_9446_8eb7ecad6509.slice/crio-a27ed670297620582ff5610a017e8903b70628a9ba4b3a767681a18df975e7aa WatchSource:0}: Error finding container a27ed670297620582ff5610a017e8903b70628a9ba4b3a767681a18df975e7aa: Status 404 returned error can't find the container with id a27ed670297620582ff5610a017e8903b70628a9ba4b3a767681a18df975e7aa
Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.422290 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="386aca13-7178-47f2-bf26-bb78e5c5ff49" path="/var/lib/kubelet/pods/386aca13-7178-47f2-bf26-bb78e5c5ff49/volumes"
Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.799508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6sbz" event={"ID":"c0ff243b-1f5d-4ab1-af8c-38a98b870d27","Type":"ContainerStarted","Data":"c363a555f5e8acbad8a6089a475de441ded4bbf447b365999623b5505b377d45"}
Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.803717 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" event={"ID":"0480fc06-58bc-47d0-9446-8eb7ecad6509","Type":"ContainerStarted","Data":"54d7a8890659b3c46b4640bcb52cc98af7b156c2ab3e4bf6fa198003af572ff7"}
Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.803770 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" event={"ID":"0480fc06-58bc-47d0-9446-8eb7ecad6509","Type":"ContainerStarted","Data":"a27ed670297620582ff5610a017e8903b70628a9ba4b3a767681a18df975e7aa"}
Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.842163 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podStartSLOduration=2.84214617 podStartE2EDuration="2.84214617s" podCreationTimestamp="2026-02-18 14:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:05:32.842091688 +0000 UTC m=+365.337812610" watchObservedRunningTime="2026-02-18 14:05:32.84214617 +0000 UTC m=+365.337867092"
Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.843474 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v6sbz" podStartSLOduration=3.267079208 podStartE2EDuration="5.843469343s" podCreationTimestamp="2026-02-18 14:05:27 +0000 UTC" firstStartedPulling="2026-02-18 14:05:29.73065215 +0000 UTC m=+362.226373072" lastFinishedPulling="2026-02-18 14:05:32.307042285 +0000 UTC m=+364.802763207" observedRunningTime="2026-02-18 14:05:32.820004326 +0000 UTC m=+365.315725248" watchObservedRunningTime="2026-02-18 14:05:32.843469343 +0000 UTC m=+365.339190265"
Feb 18 14:05:33 crc kubenswrapper[4739]: I0218 14:05:33.808819 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:33 crc kubenswrapper[4739]: I0218 14:05:33.815417 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.397379 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n5478"
Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.397776 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n5478"
Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.604099 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p4z7n"
Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.604621 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p4z7n"
Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.659870 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4z7n"
Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.873729 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4z7n"
Feb 18 14:05:36 crc kubenswrapper[4739]: I0218 14:05:36.442074 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n5478" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="registry-server" probeResult="failure" output=<
Feb 18 14:05:36 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 14:05:36 crc kubenswrapper[4739]: >
Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.100625 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-94tzm"
Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.100697 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-94tzm"
Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.172744 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-94tzm"
Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.318137 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v6sbz"
Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.318218 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v6sbz"
Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.369086 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v6sbz"
Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.898508 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-94tzm"
Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.898972 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v6sbz"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.295759 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nt8mp"]
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.296986 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.320904 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nt8mp"]
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418248 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-bound-sa-token\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418296 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs7fj\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-kube-api-access-fs7fj\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-trusted-ca\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418691 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-tls\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418777 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418846 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-certificates\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418892 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418930 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.441069 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520278 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-tls\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-certificates\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520378 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-bound-sa-token\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520455 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs7fj\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-kube-api-access-fs7fj\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520525 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-trusted-ca\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.521304 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.521854 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-certificates\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.521852 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-trusted-ca\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.525966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-tls\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.525979 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.536647 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-bound-sa-token\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.548085 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs7fj\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-kube-api-access-fs7fj\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.612248 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:44 crc kubenswrapper[4739]: I0218 14:05:44.017495 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nt8mp"]
Feb 18 14:05:44 crc kubenswrapper[4739]: I0218 14:05:44.887140 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" event={"ID":"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02","Type":"ContainerStarted","Data":"c3155104d416a7a43bd0d77b73d5d690686cc7a402700106dc139bd5e68d790f"}
Feb 18 14:05:44 crc kubenswrapper[4739]: I0218 14:05:44.887521 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp"
Feb 18 14:05:44 crc kubenswrapper[4739]: I0218 14:05:44.887534 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" event={"ID":"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02","Type":"ContainerStarted","Data":"1459845147c9ed89a02f07910c0f67b6c852b30c9394b0ad99b1a23571832593"}
Feb 18 14:05:44 crc kubenswrapper[4739]: I0218 14:05:44.907619 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" podStartSLOduration=1.907590756 podStartE2EDuration="1.907590756s" podCreationTimestamp="2026-02-18 14:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:05:44.906334604 +0000 UTC m=+377.402055536" watchObservedRunningTime="2026-02-18 14:05:44.907590756 +0000 UTC m=+377.403311708"
Feb 18 14:05:45 crc kubenswrapper[4739]: I0218 14:05:45.467240 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n5478"
Feb 18 14:05:45 crc kubenswrapper[4739]: I0218 14:05:45.506318 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n5478"
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.165480 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"]
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.166226 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" podUID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" containerName="route-controller-manager" containerID="cri-o://0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446" gracePeriod=30
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.653786 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.736281 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca\") pod \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") "
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.736509 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert\") pod \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") "
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.736550 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config\") pod \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") "
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.736586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24w9r\" (UniqueName: \"kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r\") pod \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") "
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.737353 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca" (OuterVolumeSpecName: "client-ca") pod "adb7e32d-b0a0-48cd-9bd0-03a390dcead5" (UID: "adb7e32d-b0a0-48cd-9bd0-03a390dcead5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.737398 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config" (OuterVolumeSpecName: "config") pod "adb7e32d-b0a0-48cd-9bd0-03a390dcead5" (UID: "adb7e32d-b0a0-48cd-9bd0-03a390dcead5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.742364 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "adb7e32d-b0a0-48cd-9bd0-03a390dcead5" (UID: "adb7e32d-b0a0-48cd-9bd0-03a390dcead5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.742885 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r" (OuterVolumeSpecName: "kube-api-access-24w9r") pod "adb7e32d-b0a0-48cd-9bd0-03a390dcead5" (UID: "adb7e32d-b0a0-48cd-9bd0-03a390dcead5"). InnerVolumeSpecName "kube-api-access-24w9r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.838215 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.838264 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.838274 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.838284 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24w9r\" (UniqueName: \"kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r\") on node \"crc\" DevicePath \"\""
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.928891 4739 generic.go:334] "Generic (PLEG): container finished" podID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" containerID="0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446" exitCode=0
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.928958 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" event={"ID":"adb7e32d-b0a0-48cd-9bd0-03a390dcead5","Type":"ContainerDied","Data":"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446"}
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.929003 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" event={"ID":"adb7e32d-b0a0-48cd-9bd0-03a390dcead5","Type":"ContainerDied","Data":"b1b185d98c36c27c5d4462426e4a18d83db79ec0473ca9aef0bf6917797ee642"}
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.929035 4739 scope.go:117] "RemoveContainer" containerID="0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446"
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.929207 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.965382 4739 scope.go:117] "RemoveContainer" containerID="0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446"
Feb 18 14:05:50 crc kubenswrapper[4739]: E0218 14:05:50.965978 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446\": container with ID starting with 0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446 not found: ID does not exist" containerID="0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446"
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.970213 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446"} err="failed to get container status \"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446\": rpc error: code = NotFound desc = could not find container \"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446\": container with ID starting with 0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446 not found: ID does not exist"
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.986339 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"]
Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.992190 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"]
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.713412 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"]
Feb 18 14:05:51 crc kubenswrapper[4739]: E0218 14:05:51.714302 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" containerName="route-controller-manager"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.714362 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" containerName="route-controller-manager"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.715746 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" containerName="route-controller-manager"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.717293 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.720820 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.720988 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.722409 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.722658 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.723098 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.724019 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.732327 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"]
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.755313 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-client-ca\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.755386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdxdc\" (UniqueName: \"kubernetes.io/projected/8166ccce-dd66-40c5-aed1-8f560c573a6e-kube-api-access-hdxdc\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.755645 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-config\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.755880 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8166ccce-dd66-40c5-aed1-8f560c573a6e-serving-cert\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.857692 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8166ccce-dd66-40c5-aed1-8f560c573a6e-serving-cert\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"
Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.857859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-client-ca\") pod 
\"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.857914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdxdc\" (UniqueName: \"kubernetes.io/projected/8166ccce-dd66-40c5-aed1-8f560c573a6e-kube-api-access-hdxdc\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.858029 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-config\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.860077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-client-ca\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.863163 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8166ccce-dd66-40c5-aed1-8f560c573a6e-serving-cert\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.873885 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-config\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.888321 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdxdc\" (UniqueName: \"kubernetes.io/projected/8166ccce-dd66-40c5-aed1-8f560c573a6e-kube-api-access-hdxdc\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.047256 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.418674 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" path="/var/lib/kubelet/pods/adb7e32d-b0a0-48cd-9bd0-03a390dcead5/volumes" Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.530972 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"] Feb 18 14:05:52 crc kubenswrapper[4739]: W0218 14:05:52.536263 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8166ccce_dd66_40c5_aed1_8f560c573a6e.slice/crio-fce6c45cf8aa01bbe494283538ef37d4a1c9b8c4fad8431327e9186f35ee3f9c WatchSource:0}: Error finding container fce6c45cf8aa01bbe494283538ef37d4a1c9b8c4fad8431327e9186f35ee3f9c: Status 404 returned error can't find the container with id fce6c45cf8aa01bbe494283538ef37d4a1c9b8c4fad8431327e9186f35ee3f9c Feb 18 14:05:52 crc 
kubenswrapper[4739]: I0218 14:05:52.946834 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" event={"ID":"8166ccce-dd66-40c5-aed1-8f560c573a6e","Type":"ContainerStarted","Data":"56a1307aaf68651b341dd9b1e7344cad7501683c6ef6d4563093ee7194ac943e"} Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.947397 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.947421 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" event={"ID":"8166ccce-dd66-40c5-aed1-8f560c573a6e","Type":"ContainerStarted","Data":"fce6c45cf8aa01bbe494283538ef37d4a1c9b8c4fad8431327e9186f35ee3f9c"} Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.979679 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podStartSLOduration=2.979649659 podStartE2EDuration="2.979649659s" podCreationTimestamp="2026-02-18 14:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:05:52.97410014 +0000 UTC m=+385.469821102" watchObservedRunningTime="2026-02-18 14:05:52.979649659 +0000 UTC m=+385.475370621" Feb 18 14:05:53 crc kubenswrapper[4739]: I0218 14:05:53.304057 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:55 crc kubenswrapper[4739]: I0218 14:05:55.727386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod 
\"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:55 crc kubenswrapper[4739]: I0218 14:05:55.739794 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:55 crc kubenswrapper[4739]: I0218 14:05:55.859290 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:56 crc kubenswrapper[4739]: I0218 14:05:56.132956 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg"] Feb 18 14:05:56 crc kubenswrapper[4739]: W0218 14:05:56.140755 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26e9543b_d10d_461c_8751_99e53b680e1c.slice/crio-7bcf97552da176b1e2d8eef34f86b9670ab582c0af79a04a2cda1ffd58dc145e WatchSource:0}: Error finding container 7bcf97552da176b1e2d8eef34f86b9670ab582c0af79a04a2cda1ffd58dc145e: Status 404 returned error can't find the container with id 7bcf97552da176b1e2d8eef34f86b9670ab582c0af79a04a2cda1ffd58dc145e Feb 18 14:05:56 crc kubenswrapper[4739]: I0218 14:05:56.972886 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" event={"ID":"26e9543b-d10d-461c-8751-99e53b680e1c","Type":"ContainerStarted","Data":"7bcf97552da176b1e2d8eef34f86b9670ab582c0af79a04a2cda1ffd58dc145e"} Feb 18 14:05:57 crc kubenswrapper[4739]: I0218 
14:05:57.981587 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" event={"ID":"26e9543b-d10d-461c-8751-99e53b680e1c","Type":"ContainerStarted","Data":"426a0d24cd8b8e5f72676298bc58b2a8e065bf98107a8c456aff7e5de045c61c"} Feb 18 14:05:57 crc kubenswrapper[4739]: I0218 14:05:57.982000 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:57 crc kubenswrapper[4739]: I0218 14:05:57.987910 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:57 crc kubenswrapper[4739]: I0218 14:05:57.997979 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podStartSLOduration=65.77264438 podStartE2EDuration="1m6.997957388s" podCreationTimestamp="2026-02-18 14:04:51 +0000 UTC" firstStartedPulling="2026-02-18 14:05:56.142660269 +0000 UTC m=+388.638381191" lastFinishedPulling="2026-02-18 14:05:57.367973257 +0000 UTC m=+389.863694199" observedRunningTime="2026-02-18 14:05:57.99761424 +0000 UTC m=+390.493335182" watchObservedRunningTime="2026-02-18 14:05:57.997957388 +0000 UTC m=+390.493678360" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.004098 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-gd5xj"] Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.006777 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.008815 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.009297 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.010268 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.011231 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-k7qcm" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.011283 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-gd5xj"] Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.081694 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.081805 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/946f8cb5-95e0-4850-a7ee-9be202a85f4d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.081837 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.081910 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9sbr\" (UniqueName: \"kubernetes.io/projected/946f8cb5-95e0-4850-a7ee-9be202a85f4d-kube-api-access-w9sbr\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.183610 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9sbr\" (UniqueName: \"kubernetes.io/projected/946f8cb5-95e0-4850-a7ee-9be202a85f4d-kube-api-access-w9sbr\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.183691 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.183769 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/946f8cb5-95e0-4850-a7ee-9be202a85f4d-metrics-client-ca\") pod 
\"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.183799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.185644 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/946f8cb5-95e0-4850-a7ee-9be202a85f4d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.190052 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.190515 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.208903 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9sbr\" (UniqueName: \"kubernetes.io/projected/946f8cb5-95e0-4850-a7ee-9be202a85f4d-kube-api-access-w9sbr\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.338964 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.372667 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.372754 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.372827 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.373748 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 
14:05:59.373826 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6" gracePeriod=600 Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.756939 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-gd5xj"] Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.994937 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" event={"ID":"946f8cb5-95e0-4850-a7ee-9be202a85f4d","Type":"ContainerStarted","Data":"71cfd4bd5dd7b8ef13581fb394f135c9502646ef56e20b94b7cf463404dc6758"} Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.998195 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6" exitCode=0 Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.998319 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6"} Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.998422 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775"} Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.998499 4739 scope.go:117] "RemoveContainer" containerID="3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4" Feb 18 14:06:03 crc 
kubenswrapper[4739]: I0218 14:06:03.018841 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" event={"ID":"946f8cb5-95e0-4850-a7ee-9be202a85f4d","Type":"ContainerStarted","Data":"c38be43b5695bcf43fbf84bf0ed166fb90f88397606d0ac205e19aec2e5eab1d"} Feb 18 14:06:03 crc kubenswrapper[4739]: I0218 14:06:03.019525 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" event={"ID":"946f8cb5-95e0-4850-a7ee-9be202a85f4d","Type":"ContainerStarted","Data":"242c926cf29682743ea13819c13856a2f7970236a671ddd67031f3418716a76f"} Feb 18 14:06:03 crc kubenswrapper[4739]: I0218 14:06:03.038485 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" podStartSLOduration=2.486450472 podStartE2EDuration="5.038466365s" podCreationTimestamp="2026-02-18 14:05:58 +0000 UTC" firstStartedPulling="2026-02-18 14:05:59.766108725 +0000 UTC m=+392.261829647" lastFinishedPulling="2026-02-18 14:06:02.318124618 +0000 UTC m=+394.813845540" observedRunningTime="2026-02-18 14:06:03.033716306 +0000 UTC m=+395.529437268" watchObservedRunningTime="2026-02-18 14:06:03.038466365 +0000 UTC m=+395.534187297" Feb 18 14:06:03 crc kubenswrapper[4739]: I0218 14:06:03.620590 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:06:03 crc kubenswrapper[4739]: I0218 14:06:03.705228 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.342326 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.343784 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.346638 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-w4vvt" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.346629 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.346896 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.355692 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.373574 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.374967 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.377187 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.377284 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.377403 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.379506 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-mzkwp" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.398885 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5b6adab-49f6-447e-a865-222633a2f9fd-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.398936 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.398977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.399209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xghmq\" (UniqueName: \"kubernetes.io/projected/b5b6adab-49f6-447e-a865-222633a2f9fd-kube-api-access-xghmq\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.409230 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-2r9b6"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.410769 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.413077 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.413310 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.414429 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-8hvgw"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.441037 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"]
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500413 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-tls\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6hc2\" (UniqueName: \"kubernetes.io/projected/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-kube-api-access-n6hc2\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500505 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500533 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500566 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500589 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500637 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-metrics-client-ca\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500668 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500735 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9585\" (UniqueName: \"kubernetes.io/projected/b380310c-1045-470c-a5c7-25b4357c11c7-kube-api-access-b9585\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500768 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/b380310c-1045-470c-a5c7-25b4357c11c7-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500793 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500824 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xghmq\" (UniqueName: \"kubernetes.io/projected/b5b6adab-49f6-447e-a865-222633a2f9fd-kube-api-access-xghmq\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500871 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-root\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500898 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-wtmp\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500925 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-sys\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500951 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-textfile\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5b6adab-49f6-447e-a865-222633a2f9fd-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.502663 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5b6adab-49f6-447e-a865-222633a2f9fd-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.507986 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.520066 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xghmq\" (UniqueName: \"kubernetes.io/projected/b5b6adab-49f6-447e-a865-222633a2f9fd-kube-api-access-xghmq\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.520490 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.601927 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9585\" (UniqueName: \"kubernetes.io/projected/b380310c-1045-470c-a5c7-25b4357c11c7-kube-api-access-b9585\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602730 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/b380310c-1045-470c-a5c7-25b4357c11c7-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602755 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-root\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602812 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-sys\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602831 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-wtmp\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602850 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-textfile\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602872 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-tls\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602887 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6hc2\" (UniqueName: \"kubernetes.io/projected/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-kube-api-access-n6hc2\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602926 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602962 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603003 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-metrics-client-ca\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603045 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-sys\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-textfile\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603937 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-wtmp\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.604002 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-root\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: E0218 14:06:05.604021 4739 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found
Feb 18 14:06:05 crc kubenswrapper[4739]: E0218 14:06:05.604129 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls podName:b380310c-1045-470c-a5c7-25b4357c11c7 nodeName:}" failed. No retries permitted until 2026-02-18 14:06:06.10410169 +0000 UTC m=+398.599822612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-gp8q7" (UID: "b380310c-1045-470c-a5c7-25b4357c11c7") : secret "kube-state-metrics-tls" not found
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.604320 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/b380310c-1045-470c-a5c7-25b4357c11c7-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.604360 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-metrics-client-ca\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.604853 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.605249 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.607974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.609889 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-tls\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.610424 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.617958 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9585\" (UniqueName: \"kubernetes.io/projected/b380310c-1045-470c-a5c7-25b4357c11c7-kube-api-access-b9585\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.624341 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6hc2\" (UniqueName: \"kubernetes.io/projected/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-kube-api-access-n6hc2\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.660519 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"
Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.731791 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-2r9b6"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.036633 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2r9b6" event={"ID":"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc","Type":"ContainerStarted","Data":"5e83b1ff5684d5eaf38ce92f7c04c64e4d742069fcfa624eea949d96a539896e"}
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.115922 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.122872 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.153302 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"]
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.292694 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.497552 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.503338 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.509263 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-wncft"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.509486 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.509681 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.509816 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.509946 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.510062 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.510560 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.510736 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.519371 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.521426 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627723 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627788 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627821 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627841 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-out\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627858 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-466fx\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-kube-api-access-466fx\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627891 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-volume\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627916 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627933 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-web-config\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627958 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627994 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729180 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-volume\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729255 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-web-config\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729302 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729321 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729344 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729373 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729460 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-466fx\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-kube-api-access-466fx\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729476 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-out\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.731596 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.732912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.733433 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.734701 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-out\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.735586 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.737357 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-volume\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.737699 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.738082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-web-config\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.748406 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.751542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-466fx\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-kube-api-access-466fx\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.752953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.753644 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.833242 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"]
Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.868933 4739 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:07 crc kubenswrapper[4739]: W0218 14:06:07.046432 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb380310c_1045_470c_a5c7_25b4357c11c7.slice/crio-e79da37e61532bc163efe7ee0224fdb88e0004b638b8606c18fb797aa63e75a2 WatchSource:0}: Error finding container e79da37e61532bc163efe7ee0224fdb88e0004b638b8606c18fb797aa63e75a2: Status 404 returned error can't find the container with id e79da37e61532bc163efe7ee0224fdb88e0004b638b8606c18fb797aa63e75a2 Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.049603 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" event={"ID":"b5b6adab-49f6-447e-a865-222633a2f9fd","Type":"ContainerStarted","Data":"9dfd0fa5a32597e0b3004f34a635e107330f11d3131a44c6a358eddc9cb61ff0"} Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.049685 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" event={"ID":"b5b6adab-49f6-447e-a865-222633a2f9fd","Type":"ContainerStarted","Data":"0ff860a5f49e7f1e918f8b4e1acd640479249b5853f6a2e3469d22db9752f90c"} Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.049697 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" event={"ID":"b5b6adab-49f6-447e-a865-222633a2f9fd","Type":"ContainerStarted","Data":"107df11e96f7c64c8e78dd655d3add85c1c75a7371fa9048b22ad0ec3127551c"} Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.465702 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6d644458fc-hpxhn"] Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.468175 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.471170 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.471315 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.471340 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-rvww4" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.471338 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-8pgifqrph5csl" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.471886 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.472074 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.473613 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.480492 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6d644458fc-hpxhn"] Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.494160 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 18 14:06:07 crc kubenswrapper[4739]: W0218 14:06:07.524516 4739 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23577b5e_feaf_46c2_973a_8aea75a6dbe0.slice/crio-5e4b1b4589bf2462ad8534d661802e96fb267fa913d82a8d56f711f3ae044c83 WatchSource:0}: Error finding container 5e4b1b4589bf2462ad8534d661802e96fb267fa913d82a8d56f711f3ae044c83: Status 404 returned error can't find the container with id 5e4b1b4589bf2462ad8534d661802e96fb267fa913d82a8d56f711f3ae044c83 Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.547835 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.547893 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97fx\" (UniqueName: \"kubernetes.io/projected/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-kube-api-access-t97fx\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.547919 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.548049 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" 
(UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.548112 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-metrics-client-ca\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.548150 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.548172 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-grpc-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.548214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: 
\"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649349 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-grpc-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649538 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t97fx\" (UniqueName: \"kubernetes.io/projected/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-kube-api-access-t97fx\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649563 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-metrics-client-ca\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.650652 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-metrics-client-ca\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " 
pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.655321 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.655707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.655798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-grpc-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.656234 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.657825 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.670114 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.675138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97fx\" (UniqueName: \"kubernetes.io/projected/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-kube-api-access-t97fx\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.790028 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:08 crc kubenswrapper[4739]: I0218 14:06:08.061849 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"5e4b1b4589bf2462ad8534d661802e96fb267fa913d82a8d56f711f3ae044c83"} Feb 18 14:06:08 crc kubenswrapper[4739]: I0218 14:06:08.064546 4739 generic.go:334] "Generic (PLEG): container finished" podID="ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc" containerID="fe60499bb2da428f352809855a064c61a9d0e14436f7f4ef2376c634e3d8b38b" exitCode=0 Feb 18 14:06:08 crc kubenswrapper[4739]: I0218 14:06:08.064579 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2r9b6" event={"ID":"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc","Type":"ContainerDied","Data":"fe60499bb2da428f352809855a064c61a9d0e14436f7f4ef2376c634e3d8b38b"} Feb 18 14:06:08 crc kubenswrapper[4739]: I0218 14:06:08.065666 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" event={"ID":"b380310c-1045-470c-a5c7-25b4357c11c7","Type":"ContainerStarted","Data":"e79da37e61532bc163efe7ee0224fdb88e0004b638b8606c18fb797aa63e75a2"} Feb 18 14:06:08 crc kubenswrapper[4739]: I0218 14:06:08.434545 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6d644458fc-hpxhn"] Feb 18 14:06:08 crc kubenswrapper[4739]: W0218 14:06:08.716865 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8f90ea_5539_40b0_ba4b_8b4465eae2dd.slice/crio-0a7bf97d64c05ea558dbd69fd9b2c0c1a2e3f79e5bfa4e3f168a776e524fde95 WatchSource:0}: Error finding container 0a7bf97d64c05ea558dbd69fd9b2c0c1a2e3f79e5bfa4e3f168a776e524fde95: Status 404 returned error can't find the container with id 
0a7bf97d64c05ea558dbd69fd9b2c0c1a2e3f79e5bfa4e3f168a776e524fde95 Feb 18 14:06:09 crc kubenswrapper[4739]: I0218 14:06:09.081186 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2r9b6" event={"ID":"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc","Type":"ContainerStarted","Data":"3c35f3f4b0768115712809cb0d172cc479ba567d2829b39b440076c4127765f7"} Feb 18 14:06:09 crc kubenswrapper[4739]: I0218 14:06:09.081275 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2r9b6" event={"ID":"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc","Type":"ContainerStarted","Data":"c3d4df0fc85b60e06ddab7dfdb03e906332cd5f2cc72ab85cf4c81f27a2d4f9a"} Feb 18 14:06:09 crc kubenswrapper[4739]: I0218 14:06:09.083736 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"0a7bf97d64c05ea558dbd69fd9b2c0c1a2e3f79e5bfa4e3f168a776e524fde95"} Feb 18 14:06:09 crc kubenswrapper[4739]: I0218 14:06:09.093314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" event={"ID":"b5b6adab-49f6-447e-a865-222633a2f9fd","Type":"ContainerStarted","Data":"4435d03814348fab5a9d4afd861266e4966bd64b11596e3129851d5321c330ed"} Feb 18 14:06:09 crc kubenswrapper[4739]: I0218 14:06:09.104741 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-2r9b6" podStartSLOduration=2.76643981 podStartE2EDuration="4.104720808s" podCreationTimestamp="2026-02-18 14:06:05 +0000 UTC" firstStartedPulling="2026-02-18 14:06:05.799963206 +0000 UTC m=+398.295684128" lastFinishedPulling="2026-02-18 14:06:07.138244204 +0000 UTC m=+399.633965126" observedRunningTime="2026-02-18 14:06:09.103410455 +0000 UTC m=+401.599131377" watchObservedRunningTime="2026-02-18 14:06:09.104720808 +0000 UTC m=+401.600441730" Feb 18 
14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.127503 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" event={"ID":"b380310c-1045-470c-a5c7-25b4357c11c7","Type":"ContainerStarted","Data":"c2ab5ac8c968bd33ba909e3d0d133bc92c17571e653b510b3e189a83c2b3e89e"} Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.127863 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" event={"ID":"b380310c-1045-470c-a5c7-25b4357c11c7","Type":"ContainerStarted","Data":"340f6b39a4eb6047108dab3f107a8d56499bc879335305388f13804d156c5d00"} Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.127890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" event={"ID":"b380310c-1045-470c-a5c7-25b4357c11c7","Type":"ContainerStarted","Data":"027a676ac40fbf4051d9a46d65791e5f47c216bb0f23ba25ae83e6122191cb43"} Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.143003 4739 generic.go:334] "Generic (PLEG): container finished" podID="23577b5e-feaf-46c2-973a-8aea75a6dbe0" containerID="a26df73dda58372eb291701cee940e2a7f28fa5d5cda0d1580136449df84b43e" exitCode=0 Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.143135 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerDied","Data":"a26df73dda58372eb291701cee940e2a7f28fa5d5cda0d1580136449df84b43e"} Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.198745 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" podStartSLOduration=3.66774244 podStartE2EDuration="5.198714305s" podCreationTimestamp="2026-02-18 14:06:05 +0000 UTC" firstStartedPulling="2026-02-18 14:06:06.530150239 +0000 UTC m=+399.025871161" lastFinishedPulling="2026-02-18 
14:06:08.061122114 +0000 UTC m=+400.556843026" observedRunningTime="2026-02-18 14:06:09.123045367 +0000 UTC m=+401.618766309" watchObservedRunningTime="2026-02-18 14:06:10.198714305 +0000 UTC m=+402.694435227" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.199038 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" podStartSLOduration=3.139465146 podStartE2EDuration="5.199033513s" podCreationTimestamp="2026-02-18 14:06:05 +0000 UTC" firstStartedPulling="2026-02-18 14:06:07.049297385 +0000 UTC m=+399.545018307" lastFinishedPulling="2026-02-18 14:06:09.108865752 +0000 UTC m=+401.604586674" observedRunningTime="2026-02-18 14:06:10.151787649 +0000 UTC m=+402.647508571" watchObservedRunningTime="2026-02-18 14:06:10.199033513 +0000 UTC m=+402.694754435" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.205487 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"] Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.206302 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.219025 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"] Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.317746 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.317795 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.317826 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.317969 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.318013 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.318078 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dt5\" (UniqueName: \"kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.318126 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419388 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419743 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419783 
4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6dt5\" (UniqueName: \"kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419818 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419895 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419919 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419953 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.420243 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.420829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.420949 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.421702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.423875 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.425675 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.439337 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6dt5\" (UniqueName: \"kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.530511 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.785997 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-f5c56b6cc-ft74f"] Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.787011 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.790675 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.790949 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.791028 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-k4d7v" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.791086 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.791143 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2mqmnq5hghn7e" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.791381 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.807373 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-f5c56b6cc-ft74f"] Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826357 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-server-tls\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826481 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" 
(UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-client-certs\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ac03ed3e-3bdc-48cd-bf95-119b31b15208-audit-log\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826592 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826629 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-client-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826688 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz996\" (UniqueName: \"kubernetes.io/projected/ac03ed3e-3bdc-48cd-bf95-119b31b15208-kube-api-access-tz996\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 
14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-metrics-server-audit-profiles\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.927884 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ac03ed3e-3bdc-48cd-bf95-119b31b15208-audit-log\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928202 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-client-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928304 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz996\" (UniqueName: \"kubernetes.io/projected/ac03ed3e-3bdc-48cd-bf95-119b31b15208-kube-api-access-tz996\") pod 
\"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928333 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-metrics-server-audit-profiles\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928432 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-server-tls\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928501 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-client-certs\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.929912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-metrics-server-audit-profiles\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.930349 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.930472 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ac03ed3e-3bdc-48cd-bf95-119b31b15208-audit-log\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.934129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-server-tls\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.939333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-client-certs\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.944136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz996\" (UniqueName: \"kubernetes.io/projected/ac03ed3e-3bdc-48cd-bf95-119b31b15208-kube-api-access-tz996\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.947073 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-client-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.003221 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"] Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.109594 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.173356 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5"] Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.174349 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.180995 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.183736 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.187702 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5"] Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.232656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/34c89fd8-2d23-4587-a802-4c07ad76bcd7-monitoring-plugin-cert\") pod \"monitoring-plugin-58bc79f98c-nzqw5\" (UID: \"34c89fd8-2d23-4587-a802-4c07ad76bcd7\") " 
pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.335402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/34c89fd8-2d23-4587-a802-4c07ad76bcd7-monitoring-plugin-cert\") pod \"monitoring-plugin-58bc79f98c-nzqw5\" (UID: \"34c89fd8-2d23-4587-a802-4c07ad76bcd7\") " pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.340904 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/34c89fd8-2d23-4587-a802-4c07ad76bcd7-monitoring-plugin-cert\") pod \"monitoring-plugin-58bc79f98c-nzqw5\" (UID: \"34c89fd8-2d23-4587-a802-4c07ad76bcd7\") " pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.500118 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.716253 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.718231 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.725966 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.731838 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.731995 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732077 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-g7bj36vt2qou" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732229 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732260 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732398 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732509 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732585 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-bbn9z" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732701 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732715 4739 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.733312 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.734563 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745361 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745421 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745475 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-config-out\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745565 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745602 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745653 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745680 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750128 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750194 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750322 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750399 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4lj2\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-kube-api-access-r4lj2\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750470 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc 
kubenswrapper[4739]: I0218 14:06:11.750568 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750634 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-web-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750669 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750700 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.756909 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852838 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852907 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-web-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852927 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852981 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853006 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853026 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853061 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-config-out\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853086 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853107 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853136 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853177 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853191 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853214 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4lj2\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-kube-api-access-r4lj2\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.860329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.860490 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.860798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.861078 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.861728 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.864161 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-web-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.865776 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-config-out\") pod 
\"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.867342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.867838 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.868581 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.868939 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.869644 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-config\") pod 
\"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.870184 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.872951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4lj2\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-kube-api-access-r4lj2\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.873015 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.873438 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.885710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.887169 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.973622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-f5c56b6cc-ft74f"] Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.050369 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.178374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d7d9b477-pcf5b" event={"ID":"86a3de80-d2f2-4637-bebb-5944c22a2c83","Type":"ContainerStarted","Data":"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0"} Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.179569 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d7d9b477-pcf5b" event={"ID":"86a3de80-d2f2-4637-bebb-5944c22a2c83","Type":"ContainerStarted","Data":"be64644632065d655e0cde5e224a8ff692c5d059479e399bec230d76053c2d58"} Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.184420 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"adb22130c24c318c77b58c719bdb88ab59a69125673e1949ad3756f934a71718"} Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.184468 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" 
event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"6a8a8672a915148dca3df8994d95e9107301dab90053a1361fa94db7214cf5e5"} Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.187123 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" event={"ID":"ac03ed3e-3bdc-48cd-bf95-119b31b15208","Type":"ContainerStarted","Data":"aa73099c96eeea9d12f2627be1ade2aa384673568666597342772c94f672b008"} Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.200000 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-58d7d9b477-pcf5b" podStartSLOduration=2.199981471 podStartE2EDuration="2.199981471s" podCreationTimestamp="2026-02-18 14:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:06:12.197477909 +0000 UTC m=+404.693198861" watchObservedRunningTime="2026-02-18 14:06:12.199981471 +0000 UTC m=+404.695702403" Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.228091 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5"] Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.489913 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 18 14:06:12 crc kubenswrapper[4739]: W0218 14:06:12.500372 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22142e4b_3aae_4317_a2e5_2ad225fb7473.slice/crio-66863606ffe35869ab0a46467632aa858d2745f77117b901e1f8e16d5a0a4592 WatchSource:0}: Error finding container 66863606ffe35869ab0a46467632aa858d2745f77117b901e1f8e16d5a0a4592: Status 404 returned error can't find the container with id 66863606ffe35869ab0a46467632aa858d2745f77117b901e1f8e16d5a0a4592 Feb 18 14:06:13 crc kubenswrapper[4739]: I0218 14:06:13.195314 
4739 generic.go:334] "Generic (PLEG): container finished" podID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerID="4ad90f7d33e0eefe30e1cc97c8efb390ac3860abd15d101f1750012f570a18cd" exitCode=0 Feb 18 14:06:13 crc kubenswrapper[4739]: I0218 14:06:13.195488 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerDied","Data":"4ad90f7d33e0eefe30e1cc97c8efb390ac3860abd15d101f1750012f570a18cd"} Feb 18 14:06:13 crc kubenswrapper[4739]: I0218 14:06:13.195697 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"66863606ffe35869ab0a46467632aa858d2745f77117b901e1f8e16d5a0a4592"} Feb 18 14:06:13 crc kubenswrapper[4739]: I0218 14:06:13.209435 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"49e8291bc5dc74ad5e84afc82a29ebc4a561079c55eb42533d8af56a03b4b9fe"} Feb 18 14:06:13 crc kubenswrapper[4739]: I0218 14:06:13.210796 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" event={"ID":"34c89fd8-2d23-4587-a802-4c07ad76bcd7","Type":"ContainerStarted","Data":"61fb2e7a8265abc8a0269610f87de8bb97446fd356fa20de632ecd0d4ed3b102"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.221879 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" event={"ID":"ac03ed3e-3bdc-48cd-bf95-119b31b15208","Type":"ContainerStarted","Data":"3d8147b125cb5878360a74eb88bb0e2f86a338193df75f8534e81151d855bde8"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225650 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"d7d819a75ff16ae5fd1f51c469e10e3f03490067108b49ce9933d4320a1f1563"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225676 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"f293877fc2776f27dd857aabf840a661a55e7c2af8bf7dfa8f951d3c0b01263d"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225686 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"ff1b70884dec11ac788c995609cd5beebf00c9550a577182ae4459b42878b89e"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225698 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"dce215bcb597ebaa24774149cd0da8065088fcb0feb466d3c85c8d87c4c2142f"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225706 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"073271a4ef8cb53d82714dc7e915376681272babf13d94cf4df8a438d58aba8d"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225714 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"0c6e44b84e26e4b9c9521c432a29b331b099ef15fc2f7676b00406d68c3c71c0"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.232058 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" 
event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"1fcc315e5cb158fb4d26e7f06f27b9e1172813ead3e36a1d800348ed252007d7"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.232104 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"c6b9b3933a5aa2fb26295fc546890c7b0b185d91c249c634977d990d526473c8"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.232117 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"23c5a95dc2b11d76ac644f4a0c724d3d25eaa3b45bf2335f9508f95756a8cb89"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.232972 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.235030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" event={"ID":"34c89fd8-2d23-4587-a802-4c07ad76bcd7","Type":"ContainerStarted","Data":"e30d333b2583ca5048cb59beefd346a122b6f9759bd6ef0a566af1a13b37d8d9"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.235558 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.243093 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podStartSLOduration=2.8170251 podStartE2EDuration="5.243081618s" podCreationTimestamp="2026-02-18 14:06:10 +0000 UTC" firstStartedPulling="2026-02-18 14:06:11.981107118 +0000 UTC m=+404.476828040" lastFinishedPulling="2026-02-18 14:06:14.407163636 +0000 UTC m=+406.902884558" 
observedRunningTime="2026-02-18 14:06:15.241381935 +0000 UTC m=+407.737102857" watchObservedRunningTime="2026-02-18 14:06:15.243081618 +0000 UTC m=+407.738802540" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.246940 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.261654 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podStartSLOduration=2.151216432 podStartE2EDuration="4.261631813s" podCreationTimestamp="2026-02-18 14:06:11 +0000 UTC" firstStartedPulling="2026-02-18 14:06:12.254174598 +0000 UTC m=+404.749895520" lastFinishedPulling="2026-02-18 14:06:14.364589979 +0000 UTC m=+406.860310901" observedRunningTime="2026-02-18 14:06:15.256646488 +0000 UTC m=+407.752367440" watchObservedRunningTime="2026-02-18 14:06:15.261631813 +0000 UTC m=+407.757352745" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.298730 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.461395193 podStartE2EDuration="9.298714832s" podCreationTimestamp="2026-02-18 14:06:06 +0000 UTC" firstStartedPulling="2026-02-18 14:06:07.527386603 +0000 UTC m=+400.023107535" lastFinishedPulling="2026-02-18 14:06:14.364706232 +0000 UTC m=+406.860427174" observedRunningTime="2026-02-18 14:06:15.293403109 +0000 UTC m=+407.789124041" watchObservedRunningTime="2026-02-18 14:06:15.298714832 +0000 UTC m=+407.794435764" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.339863 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podStartSLOduration=2.649585289 podStartE2EDuration="8.339105024s" podCreationTimestamp="2026-02-18 14:06:07 +0000 UTC" firstStartedPulling="2026-02-18 14:06:08.725793505 +0000 
UTC m=+401.221514427" lastFinishedPulling="2026-02-18 14:06:14.41531324 +0000 UTC m=+406.911034162" observedRunningTime="2026-02-18 14:06:15.327471832 +0000 UTC m=+407.823192744" watchObservedRunningTime="2026-02-18 14:06:15.339105024 +0000 UTC m=+407.834825956" Feb 18 14:06:17 crc kubenswrapper[4739]: I0218 14:06:17.269111 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:18 crc kubenswrapper[4739]: I0218 14:06:18.259037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"6965db354ab966e13601a58de0203f89563ca80f5969237fb79d53cec016183d"} Feb 18 14:06:18 crc kubenswrapper[4739]: I0218 14:06:18.259362 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"8ac388f60ad587f76bc829b8147d223d8d2754cf343adbdbbd18054eb2a8cfd9"} Feb 18 14:06:18 crc kubenswrapper[4739]: I0218 14:06:18.259375 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"e64739d4eeff90f8dd89979ea950c5c58ba3adc6ba05687ccaaead8cc5dfd928"} Feb 18 14:06:18 crc kubenswrapper[4739]: I0218 14:06:18.259383 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"f95a56436fb67b08067185ab8a6e5fc004c22bad4e1d1da23f657959c48fad41"} Feb 18 14:06:18 crc kubenswrapper[4739]: I0218 14:06:18.259403 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"01164903cded0adf0fd45394d4abd75818beaa631631ae3bc0adb5fa40229910"} Feb 18 14:06:19 crc kubenswrapper[4739]: I0218 14:06:19.272782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"33a653ef95267e3f16fa03f490e52fffaf95421b7b4abba12f9f5311f7e0aacd"} Feb 18 14:06:19 crc kubenswrapper[4739]: I0218 14:06:19.335106 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.975297029 podStartE2EDuration="8.335080662s" podCreationTimestamp="2026-02-18 14:06:11 +0000 UTC" firstStartedPulling="2026-02-18 14:06:13.198505716 +0000 UTC m=+405.694226638" lastFinishedPulling="2026-02-18 14:06:17.558289349 +0000 UTC m=+410.054010271" observedRunningTime="2026-02-18 14:06:19.329069341 +0000 UTC m=+411.824790303" watchObservedRunningTime="2026-02-18 14:06:19.335080662 +0000 UTC m=+411.830801614" Feb 18 14:06:20 crc kubenswrapper[4739]: I0218 14:06:20.531083 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:20 crc kubenswrapper[4739]: I0218 14:06:20.531611 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:20 crc kubenswrapper[4739]: I0218 14:06:20.536552 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:21 crc kubenswrapper[4739]: I0218 14:06:21.292961 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:21 crc kubenswrapper[4739]: I0218 14:06:21.345582 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"] Feb 
18 14:06:22 crc kubenswrapper[4739]: I0218 14:06:22.051526 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:28 crc kubenswrapper[4739]: I0218 14:06:28.761213 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" podUID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" containerName="registry" containerID="cri-o://c53d5a482db632b149d61954455c1b63897dc05aa1c7bf18271a0c5962e25f92" gracePeriod=30 Feb 18 14:06:29 crc kubenswrapper[4739]: I0218 14:06:29.340735 4739 generic.go:334] "Generic (PLEG): container finished" podID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" containerID="c53d5a482db632b149d61954455c1b63897dc05aa1c7bf18271a0c5962e25f92" exitCode=0 Feb 18 14:06:29 crc kubenswrapper[4739]: I0218 14:06:29.340810 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" event={"ID":"42c00254-0b69-45d3-8dd6-7f2ee914d65d","Type":"ContainerDied","Data":"c53d5a482db632b149d61954455c1b63897dc05aa1c7bf18271a0c5962e25f92"} Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.073611 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.110310 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.111136 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.155751 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156261 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156306 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156541 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 
14:06:31.156648 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156701 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr8zc\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156757 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.157345 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.157990 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.159406 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.162085 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.163179 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.163639 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc" (OuterVolumeSpecName: "kube-api-access-lr8zc") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "kube-api-access-lr8zc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.172120 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.174943 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.189082 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259089 4739 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259138 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr8zc\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259151 4739 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259159 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259168 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259175 4739 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259184 4739 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc 
kubenswrapper[4739]: I0218 14:06:31.360214 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.360285 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" event={"ID":"42c00254-0b69-45d3-8dd6-7f2ee914d65d","Type":"ContainerDied","Data":"b96e22f2e4072131e39645eec1bdeb575f2e322af330e9ccff4e59c7655f9d27"}
Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.360331 4739 scope.go:117] "RemoveContainer" containerID="c53d5a482db632b149d61954455c1b63897dc05aa1c7bf18271a0c5962e25f92"
Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.400792 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"]
Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.411892 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"]
Feb 18 14:06:32 crc kubenswrapper[4739]: I0218 14:06:32.424374 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" path="/var/lib/kubelet/pods/42c00254-0b69-45d3-8dd6-7f2ee914d65d/volumes"
Feb 18 14:06:46 crc kubenswrapper[4739]: I0218 14:06:46.418018 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-r2dqq" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" containerName="console" containerID="cri-o://e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27" gracePeriod=15
Feb 18 14:06:46 crc kubenswrapper[4739]: I0218 14:06:46.924373 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r2dqq_dcd69695-49d3-46a8-9981-b592c44e827e/console/0.log"
Feb 18 14:06:46 crc kubenswrapper[4739]: I0218 14:06:46.924722 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r2dqq"
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005224 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") "
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") "
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005393 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") "
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005502 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") "
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005548 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") "
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005584 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") "
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005675 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvpnt\" (UniqueName: \"kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") "
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.006691 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.006706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.006720 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca" (OuterVolumeSpecName: "service-ca") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.006827 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config" (OuterVolumeSpecName: "console-config") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.011374 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.011567 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt" (OuterVolumeSpecName: "kube-api-access-fvpnt") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "kube-api-access-fvpnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.012076 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107604 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107640 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107655 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107666 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107676 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107684 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca\") on node \"crc\" DevicePath \"\""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107696 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvpnt\" (UniqueName: \"kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt\") on node \"crc\" DevicePath \"\""
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496040 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r2dqq_dcd69695-49d3-46a8-9981-b592c44e827e/console/0.log"
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496141 4739 generic.go:334] "Generic (PLEG): container finished" podID="dcd69695-49d3-46a8-9981-b592c44e827e" containerID="e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27" exitCode=2
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496201 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r2dqq" event={"ID":"dcd69695-49d3-46a8-9981-b592c44e827e","Type":"ContainerDied","Data":"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27"}
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496252 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r2dqq" event={"ID":"dcd69695-49d3-46a8-9981-b592c44e827e","Type":"ContainerDied","Data":"521d0f76ee7d4a163d13b57cff922dcd0df4129aae7138664aa07df19279036a"}
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496292 4739 scope.go:117] "RemoveContainer" containerID="e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27"
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496645 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r2dqq"
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.535124 4739 scope.go:117] "RemoveContainer" containerID="e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27"
Feb 18 14:06:47 crc kubenswrapper[4739]: E0218 14:06:47.536170 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27\": container with ID starting with e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27 not found: ID does not exist" containerID="e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27"
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.536227 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27"} err="failed to get container status \"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27\": rpc error: code = NotFound desc = could not find container \"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27\": container with ID starting with e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27 not found: ID does not exist"
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.540759 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"]
Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.549330 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"]
Feb 18 14:06:48 crc kubenswrapper[4739]: I0218 14:06:48.423532 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" path="/var/lib/kubelet/pods/dcd69695-49d3-46a8-9981-b592c44e827e/volumes"
Feb 18 14:06:51 crc kubenswrapper[4739]: I0218 14:06:51.117393 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f"
Feb 18 14:06:51 crc kubenswrapper[4739]: I0218 14:06:51.126749 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f"
Feb 18 14:07:12 crc kubenswrapper[4739]: I0218 14:07:12.051249 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Feb 18 14:07:12 crc kubenswrapper[4739]: I0218 14:07:12.089752 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Feb 18 14:07:12 crc kubenswrapper[4739]: I0218 14:07:12.760022 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.792052 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-796648847c-cwj5j"]
Feb 18 14:07:49 crc kubenswrapper[4739]: E0218 14:07:49.792948 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" containerName="console"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.792969 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" containerName="console"
Feb 18 14:07:49 crc kubenswrapper[4739]: E0218 14:07:49.793002 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" containerName="registry"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.793011 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" containerName="registry"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.793157 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" containerName="console"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.793185 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" containerName="registry"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.793747 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.805215 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-796648847c-cwj5j"]
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.981507 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.981549 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.981591 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.981746 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.982038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.982090 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.982174 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v824p\" (UniqueName: \"kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.082889 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.082930 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.082958 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.082984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.083023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.083039 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.083061 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v824p\" (UniqueName: \"kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.084804 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.086010 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.086136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.086544 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.089709 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.090627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.117635 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v824p\" (UniqueName: \"kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.412668 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.606853 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-796648847c-cwj5j"]
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.993853 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796648847c-cwj5j" event={"ID":"d4490109-c2b2-4264-b163-1e259f4b335c","Type":"ContainerStarted","Data":"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000"}
Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.993953 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796648847c-cwj5j" event={"ID":"d4490109-c2b2-4264-b163-1e259f4b335c","Type":"ContainerStarted","Data":"ced41aeb18b143d7cb7b37389d8e7093c6f932a8b69ee8fd71755fd592dcd4fa"}
Feb 18 14:07:51 crc kubenswrapper[4739]: I0218 14:07:51.018666 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-796648847c-cwj5j" podStartSLOduration=2.018635085 podStartE2EDuration="2.018635085s" podCreationTimestamp="2026-02-18 14:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:07:51.012141463 +0000 UTC m=+503.507862465" watchObservedRunningTime="2026-02-18 14:07:51.018635085 +0000 UTC m=+503.514356077"
Feb 18 14:07:59 crc kubenswrapper[4739]: I0218 14:07:59.373237 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:07:59 crc kubenswrapper[4739]: I0218 14:07:59.373952 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:08:00 crc kubenswrapper[4739]: I0218 14:08:00.420871 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:08:00 crc kubenswrapper[4739]: I0218 14:08:00.421197 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:08:00 crc kubenswrapper[4739]: I0218 14:08:00.421306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:08:00 crc kubenswrapper[4739]: I0218 14:08:00.425365 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:08:00 crc kubenswrapper[4739]: I0218 14:08:00.516729 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"]
Feb 18 14:08:25 crc kubenswrapper[4739]: I0218 14:08:25.574175 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-58d7d9b477-pcf5b" podUID="86a3de80-d2f2-4637-bebb-5944c22a2c83" containerName="console" containerID="cri-o://80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0" gracePeriod=15
Feb 18 14:08:25 crc kubenswrapper[4739]: I0218 14:08:25.957609 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58d7d9b477-pcf5b_86a3de80-d2f2-4637-bebb-5944c22a2c83/console/0.log"
Feb 18 14:08:25 crc kubenswrapper[4739]: I0218 14:08:25.957973 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d7d9b477-pcf5b"
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047556 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") "
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047647 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") "
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047800 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") "
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047855 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6dt5\" (UniqueName: \"kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") "
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047911 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") "
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047943 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") "
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047978 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") "
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.048197 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.048363 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.049009 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca" (OuterVolumeSpecName: "service-ca") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.049035 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.049579 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config" (OuterVolumeSpecName: "console-config") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.053692 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5" (OuterVolumeSpecName: "kube-api-access-b6dt5") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "kube-api-access-b6dt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.054596 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.054647 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150206 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150258 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6dt5\" (UniqueName: \"kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5\") on node \"crc\" DevicePath \"\""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150282 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca\") on node \"crc\" DevicePath \"\""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150303 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150322 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150340 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251782 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58d7d9b477-pcf5b_86a3de80-d2f2-4637-bebb-5944c22a2c83/console/0.log"
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251834 4739 generic.go:334] "Generic (PLEG): container finished" podID="86a3de80-d2f2-4637-bebb-5944c22a2c83" containerID="80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0" exitCode=2
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251861 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d7d9b477-pcf5b" event={"ID":"86a3de80-d2f2-4637-bebb-5944c22a2c83","Type":"ContainerDied","Data":"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0"}
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251887 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d7d9b477-pcf5b" event={"ID":"86a3de80-d2f2-4637-bebb-5944c22a2c83","Type":"ContainerDied","Data":"be64644632065d655e0cde5e224a8ff692c5d059479e399bec230d76053c2d58"}
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251905 4739 scope.go:117] "RemoveContainer" containerID="80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0"
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251945 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d7d9b477-pcf5b"
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.283875 4739 scope.go:117] "RemoveContainer" containerID="80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0"
Feb 18 14:08:26 crc kubenswrapper[4739]: E0218 14:08:26.285236 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0\": container with ID starting with 80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0 not found: ID does not exist" containerID="80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0"
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.285333 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0"} err="failed to get container status \"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0\": rpc error: code = NotFound desc = could not find container \"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0\": container with ID starting with 80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0 not found: ID does not exist"
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.301853 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"]
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.305588 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"]
Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.419996 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a3de80-d2f2-4637-bebb-5944c22a2c83" path="/var/lib/kubelet/pods/86a3de80-d2f2-4637-bebb-5944c22a2c83/volumes"
Feb 18 14:08:28 crc kubenswrapper[4739]: I0218 14:08:28.615205 4739 scope.go:117] "RemoveContainer"
containerID="22ab4c4400803a84698f429676267f73d2f72204f8bfd5e8b8c44045eb32a01a" Feb 18 14:08:29 crc kubenswrapper[4739]: I0218 14:08:29.373153 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:08:29 crc kubenswrapper[4739]: I0218 14:08:29.373595 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:08:59 crc kubenswrapper[4739]: I0218 14:08:59.372557 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:08:59 crc kubenswrapper[4739]: I0218 14:08:59.373251 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:08:59 crc kubenswrapper[4739]: I0218 14:08:59.373312 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:08:59 crc kubenswrapper[4739]: I0218 14:08:59.376669 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:08:59 crc kubenswrapper[4739]: I0218 14:08:59.376968 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775" gracePeriod=600 Feb 18 14:09:00 crc kubenswrapper[4739]: I0218 14:09:00.505055 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775" exitCode=0 Feb 18 14:09:00 crc kubenswrapper[4739]: I0218 14:09:00.505160 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775"} Feb 18 14:09:00 crc kubenswrapper[4739]: I0218 14:09:00.505605 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939"} Feb 18 14:09:00 crc kubenswrapper[4739]: I0218 14:09:00.505647 4739 scope.go:117] "RemoveContainer" containerID="c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.364254 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8"] Feb 18 14:09:07 crc 
kubenswrapper[4739]: E0218 14:09:07.365141 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a3de80-d2f2-4637-bebb-5944c22a2c83" containerName="console" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.365160 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a3de80-d2f2-4637-bebb-5944c22a2c83" containerName="console" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.365297 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a3de80-d2f2-4637-bebb-5944c22a2c83" containerName="console" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.366287 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.368989 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.383815 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8"] Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.410300 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.410371 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn8wp\" (UniqueName: \"kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.410478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.511422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.511502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn8wp\" (UniqueName: \"kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.511549 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.511974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.512046 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.531729 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn8wp\" (UniqueName: \"kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.682337 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.897927 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8"] Feb 18 14:09:08 crc kubenswrapper[4739]: I0218 14:09:08.557661 4739 generic.go:334] "Generic (PLEG): container finished" podID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerID="b45866b485a873e533217c5609dff01b7a1fbda5b6dd344d2f3f11bef95be4df" exitCode=0 Feb 18 14:09:08 crc kubenswrapper[4739]: I0218 14:09:08.557733 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" event={"ID":"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9","Type":"ContainerDied","Data":"b45866b485a873e533217c5609dff01b7a1fbda5b6dd344d2f3f11bef95be4df"} Feb 18 14:09:08 crc kubenswrapper[4739]: I0218 14:09:08.557798 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" event={"ID":"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9","Type":"ContainerStarted","Data":"41cdf91f468feaa1446bfaac2c0029bfe52337049631873b866501ecff6dfa06"} Feb 18 14:09:08 crc kubenswrapper[4739]: I0218 14:09:08.559870 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:09:10 crc kubenswrapper[4739]: I0218 14:09:10.571240 4739 generic.go:334] "Generic (PLEG): container finished" podID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerID="2271379a199a89e7ff76a4a76d9c723a989b4feb61f0a0f5f17a7ee8b6115e19" exitCode=0 Feb 18 14:09:10 crc kubenswrapper[4739]: I0218 14:09:10.571518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" 
event={"ID":"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9","Type":"ContainerDied","Data":"2271379a199a89e7ff76a4a76d9c723a989b4feb61f0a0f5f17a7ee8b6115e19"} Feb 18 14:09:11 crc kubenswrapper[4739]: I0218 14:09:11.584218 4739 generic.go:334] "Generic (PLEG): container finished" podID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerID="bf2d0b5b32f74e0b202e14eff82aa0195b1a6152ef8b92136c9f5d68b3ee0774" exitCode=0 Feb 18 14:09:11 crc kubenswrapper[4739]: I0218 14:09:11.584275 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" event={"ID":"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9","Type":"ContainerDied","Data":"bf2d0b5b32f74e0b202e14eff82aa0195b1a6152ef8b92136c9f5d68b3ee0774"} Feb 18 14:09:12 crc kubenswrapper[4739]: I0218 14:09:12.885673 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.004599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn8wp\" (UniqueName: \"kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp\") pod \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.004669 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle\") pod \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.004721 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util\") pod \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\" (UID: 
\"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.010635 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp" (OuterVolumeSpecName: "kube-api-access-cn8wp") pod "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" (UID: "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9"). InnerVolumeSpecName "kube-api-access-cn8wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.010930 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle" (OuterVolumeSpecName: "bundle") pod "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" (UID: "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.018572 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util" (OuterVolumeSpecName: "util") pod "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" (UID: "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.105808 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn8wp\" (UniqueName: \"kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.106084 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.106096 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.601882 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" event={"ID":"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9","Type":"ContainerDied","Data":"41cdf91f468feaa1446bfaac2c0029bfe52337049631873b866501ecff6dfa06"} Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.601952 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41cdf91f468feaa1446bfaac2c0029bfe52337049631873b866501ecff6dfa06" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.601994 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.932591 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4j94"] Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933387 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-controller" containerID="cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933681 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="sbdb" containerID="cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933690 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="northd" containerID="cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933742 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-acl-logging" containerID="cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933751 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" 
containerName="kube-rbac-proxy-node" containerID="cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933712 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="nbdb" containerID="cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933814 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.966763 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" containerID="cri-o://54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" gracePeriod=30 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.645291 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/3.log" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.648376 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-acl-logging/0.log" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.648994 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-controller/0.log" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 
14:09:19.649545 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" exitCode=0 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649573 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" exitCode=0 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649582 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" exitCode=0 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649590 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" exitCode=0 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649599 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" exitCode=143 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649607 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" exitCode=143 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649629 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649679 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" 
event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649690 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649712 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649721 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649739 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.652010 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/2.log" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.652515 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/1.log" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.652558 4739 generic.go:334] "Generic (PLEG): container finished" podID="ec8fd6de-f77b-48a7-848f-a1b94e866365" containerID="d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1" exitCode=2 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.652589 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerDied","Data":"d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.653048 4739 scope.go:117] "RemoveContainer" containerID="d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1" Feb 18 14:09:19 crc kubenswrapper[4739]: E0218 14:09:19.653267 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-h9slg_openshift-multus(ec8fd6de-f77b-48a7-848f-a1b94e866365)\"" pod="openshift-multus/multus-h9slg" podUID="ec8fd6de-f77b-48a7-848f-a1b94e866365" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.677766 4739 scope.go:117] "RemoveContainer" containerID="c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.219138 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-acl-logging/0.log" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.219560 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-controller/0.log" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.219910 4739 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.274813 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-njz85"] Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275179 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="northd" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275204 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="northd" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275223 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275238 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275253 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275266 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275283 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="util" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275295 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="util" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275310 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" 
containerName="pull" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275322 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="pull" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275340 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-acl-logging" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275351 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-acl-logging" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275376 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-node" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275389 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-node" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275411 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275423 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275436 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="sbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275452 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="sbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275493 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="extract" Feb 18 14:09:20 
crc kubenswrapper[4739]: I0218 14:09:20.275506 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="extract" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275523 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kubecfg-setup" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275535 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kubecfg-setup" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275551 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="nbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275563 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="nbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275583 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275595 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275608 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275620 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275638 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 
14:09:20.275651 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275831 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275862 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-acl-logging" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275877 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="northd" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275897 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275911 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275923 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275937 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275953 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-node" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275971 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="sbdb" Feb 18 14:09:20 crc 
kubenswrapper[4739]: I0218 14:09:20.275993 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="nbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.276011 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.276027 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="extract" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.276239 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.276261 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.276428 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.279054 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403147 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403222 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403236 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403265 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403288 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtd5n\" (UniqueName: \"kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403309 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403329 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403347 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403361 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" 
(UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403378 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403398 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403414 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403439 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403471 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403488 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403503 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403538 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403553 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403619 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-log-socket\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403639 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-env-overrides\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovn-node-metrics-cert\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403676 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403712 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-config\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403739 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-node-log\") pod 
\"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403757 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-kubelet\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403773 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-etc-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403802 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-script-lib\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-ovn\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403838 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403858 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-var-lib-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403874 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-bin\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403889 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-slash\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403905 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-systemd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403933 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-netns\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403951 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-netd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403968 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j7kd\" (UniqueName: \"kubernetes.io/projected/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-kube-api-access-9j7kd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403982 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-systemd-units\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404075 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket" (OuterVolumeSpecName: "log-socket") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404097 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404113 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404145 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404161 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405116 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log" (OuterVolumeSpecName: "node-log") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405162 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405178 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405199 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash" (OuterVolumeSpecName: "host-slash") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405268 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405263 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405379 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405493 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405639 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.409893 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n" (OuterVolumeSpecName: "kube-api-access-dtd5n") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "kube-api-access-dtd5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.419349 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.419704 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.504893 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-netns\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.504946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-netd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.504973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-systemd-units\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.504996 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j7kd\" (UniqueName: \"kubernetes.io/projected/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-kube-api-access-9j7kd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-log-socket\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc 
kubenswrapper[4739]: I0218 14:09:20.505042 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-netd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505084 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-systemd-units\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-env-overrides\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505202 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovn-node-metrics-cert\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505264 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 
14:09:20.505312 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505340 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-config\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505384 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-kubelet\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505405 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-node-log\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505429 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-etc-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-script-lib\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505603 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-ovn\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-env-overrides\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505664 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-netns\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505734 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-log-socket\") pod 
\"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-node-log\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505768 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-kubelet\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506159 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-etc-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506205 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-config\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506242 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-ovn\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506350 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506430 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506531 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-var-lib-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-var-lib-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 
14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-bin\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506621 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-slash\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506653 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-systemd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506691 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-slash\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506694 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-bin\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506860 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-systemd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506881 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-script-lib\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506893 4739 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507080 4739 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507142 4739 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507202 4739 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507261 4739 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" 
DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507317 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtd5n\" (UniqueName: \"kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507368 4739 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507423 4739 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507491 4739 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507556 4739 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507608 4739 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507662 4739 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507715 4739 reconciler_common.go:293] "Volume 
detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507767 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507819 4739 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507873 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507924 4739 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507973 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.508025 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.508077 4739 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.511038 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovn-node-metrics-cert\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.523294 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j7kd\" (UniqueName: \"kubernetes.io/projected/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-kube-api-access-9j7kd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.590801 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.700342 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"3f5bb4b788270d83cf1ae7e041c7cf11a02a5fd2aa5c9b8f5840253f4687d109"} Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.705396 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-acl-logging/0.log" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.706035 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-controller/0.log" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.706630 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" exitCode=0 Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.706713 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" exitCode=0 Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.706859 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.707370 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e"} Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.707432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41"} Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.707459 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"994cdd394e91062d3bf50c4eb1ba16a7ab9c2957bfb870b8f9ecfcf4d7fc50a5"} Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.707483 4739 scope.go:117] "RemoveContainer" containerID="54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.715603 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/2.log" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.739771 4739 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.740673 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.741397 4739 scope.go:117] "RemoveContainer" containerID="76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.743757 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.743919 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.758325 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4j94"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.758855 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-qwkkp" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.780418 4739 scope.go:117] "RemoveContainer" containerID="d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.792822 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4j94"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.798384 4739 scope.go:117] "RemoveContainer" containerID="f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.813692 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htg7c\" (UniqueName: 
\"kubernetes.io/projected/ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc-kube-api-access-htg7c\") pod \"obo-prometheus-operator-68bc856cb9-c9tcc\" (UID: \"ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.827613 4739 scope.go:117] "RemoveContainer" containerID="212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.856171 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.864154 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.868763 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-4rltn" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.868965 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.870564 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.871658 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.893920 4739 scope.go:117] "RemoveContainer" containerID="15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.909704 4739 scope.go:117] "RemoveContainer" containerID="fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.914978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.915030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.915084 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.915151 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-htg7c\" (UniqueName: \"kubernetes.io/projected/ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc-kube-api-access-htg7c\") pod \"obo-prometheus-operator-68bc856cb9-c9tcc\" (UID: \"ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.915210 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.928572 4739 scope.go:117] "RemoveContainer" containerID="12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.935350 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htg7c\" (UniqueName: \"kubernetes.io/projected/ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc-kube-api-access-htg7c\") pod \"obo-prometheus-operator-68bc856cb9-c9tcc\" (UID: \"ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.954628 4739 scope.go:117] "RemoveContainer" containerID="bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.970590 4739 scope.go:117] "RemoveContainer" containerID="54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.970961 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8\": container with ID starting with 54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8 not found: ID does not exist" containerID="54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.971002 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8"} err="failed to get container status \"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8\": rpc error: code = NotFound desc = could not find container \"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8\": container with ID starting with 54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.971036 4739 scope.go:117] "RemoveContainer" containerID="76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.971311 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\": container with ID starting with 76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34 not found: ID does not exist" containerID="76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.971340 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34"} err="failed to get container status \"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\": rpc error: code = NotFound desc = could not find container \"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\": container with ID 
starting with 76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.971360 4739 scope.go:117] "RemoveContainer" containerID="d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.971851 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mqkqw"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.972526 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.973757 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\": container with ID starting with d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216 not found: ID does not exist" containerID="d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.973796 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216"} err="failed to get container status \"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\": rpc error: code = NotFound desc = could not find container \"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\": container with ID starting with d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.973856 4739 scope.go:117] "RemoveContainer" containerID="f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.974130 4739 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\": container with ID starting with f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334 not found: ID does not exist" containerID="f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.974161 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334"} err="failed to get container status \"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\": rpc error: code = NotFound desc = could not find container \"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\": container with ID starting with f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.974178 4739 scope.go:117] "RemoveContainer" containerID="212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.974420 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\": container with ID starting with 212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e not found: ID does not exist" containerID="212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.974481 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e"} err="failed to get container status \"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\": rpc error: code = NotFound desc = could 
not find container \"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\": container with ID starting with 212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.974506 4739 scope.go:117] "RemoveContainer" containerID="15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.974929 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-z95ts" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975086 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.975169 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\": container with ID starting with 15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41 not found: ID does not exist" containerID="15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975193 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41"} err="failed to get container status \"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\": rpc error: code = NotFound desc = could not find container \"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\": container with ID starting with 15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975208 4739 scope.go:117] "RemoveContainer" 
containerID="fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.975598 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\": container with ID starting with fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552 not found: ID does not exist" containerID="fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975618 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552"} err="failed to get container status \"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\": rpc error: code = NotFound desc = could not find container \"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\": container with ID starting with fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975630 4739 scope.go:117] "RemoveContainer" containerID="12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.975850 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\": container with ID starting with 12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8 not found: ID does not exist" containerID="12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975870 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8"} err="failed to get container status \"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\": rpc error: code = NotFound desc = could not find container \"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\": container with ID starting with 12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975881 4739 scope.go:117] "RemoveContainer" containerID="bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.976074 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\": container with ID starting with bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7 not found: ID does not exist" containerID="bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976101 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7"} err="failed to get container status \"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\": rpc error: code = NotFound desc = could not find container \"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\": container with ID starting with bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976118 4739 scope.go:117] "RemoveContainer" containerID="54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976310 4739 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8"} err="failed to get container status \"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8\": rpc error: code = NotFound desc = could not find container \"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8\": container with ID starting with 54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976326 4739 scope.go:117] "RemoveContainer" containerID="76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976563 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34"} err="failed to get container status \"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\": rpc error: code = NotFound desc = could not find container \"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\": container with ID starting with 76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976591 4739 scope.go:117] "RemoveContainer" containerID="d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976768 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216"} err="failed to get container status \"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\": rpc error: code = NotFound desc = could not find container \"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\": container with ID starting with d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216 not 
found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976785 4739 scope.go:117] "RemoveContainer" containerID="f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977027 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334"} err="failed to get container status \"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\": rpc error: code = NotFound desc = could not find container \"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\": container with ID starting with f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977047 4739 scope.go:117] "RemoveContainer" containerID="212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977329 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e"} err="failed to get container status \"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\": rpc error: code = NotFound desc = could not find container \"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\": container with ID starting with 212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977349 4739 scope.go:117] "RemoveContainer" containerID="15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977606 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41"} err="failed to get 
container status \"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\": rpc error: code = NotFound desc = could not find container \"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\": container with ID starting with 15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977626 4739 scope.go:117] "RemoveContainer" containerID="fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977813 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552"} err="failed to get container status \"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\": rpc error: code = NotFound desc = could not find container \"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\": container with ID starting with fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977830 4739 scope.go:117] "RemoveContainer" containerID="12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.978080 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8"} err="failed to get container status \"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\": rpc error: code = NotFound desc = could not find container \"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\": container with ID starting with 12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.978101 4739 scope.go:117] "RemoveContainer" 
containerID="bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.978294 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7"} err="failed to get container status \"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\": rpc error: code = NotFound desc = could not find container \"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\": container with ID starting with bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7 not found: ID does not exist" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015734 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015808 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015870 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0348c042-11c0-4a27-a8d4-04beea8e11a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015911 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpz9w\" (UniqueName: \"kubernetes.io/projected/0348c042-11c0-4a27-a8d4-04beea8e11a3-kube-api-access-xpz9w\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.018684 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.018709 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.018686 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.018841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.060333 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-lpf5k"] Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.061109 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.065442 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-k7x6s" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.087990 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.117414 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0348c042-11c0-4a27-a8d4-04beea8e11a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.117489 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpz9w\" (UniqueName: \"kubernetes.io/projected/0348c042-11c0-4a27-a8d4-04beea8e11a3-kube-api-access-xpz9w\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.117529 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.117588 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9f2\" (UniqueName: \"kubernetes.io/projected/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-kube-api-access-nw9f2\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.120556 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(f05be0938706833fbc0743c46db4bf246ef03c44b5f93b6e433e07a7ab66e795): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.120785 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(f05be0938706833fbc0743c46db4bf246ef03c44b5f93b6e433e07a7ab66e795): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.120823 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(f05be0938706833fbc0743c46db4bf246ef03c44b5f93b6e433e07a7ab66e795): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.120889 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(f05be0938706833fbc0743c46db4bf246ef03c44b5f93b6e433e07a7ab66e795): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" podUID="ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.122157 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0348c042-11c0-4a27-a8d4-04beea8e11a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.137281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpz9w\" (UniqueName: \"kubernetes.io/projected/0348c042-11c0-4a27-a8d4-04beea8e11a3-kube-api-access-xpz9w\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.209692 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.218475 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9f2\" (UniqueName: \"kubernetes.io/projected/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-kube-api-access-nw9f2\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.218600 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.219707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.228045 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.234517 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(176335bd8c350bde1afbe3ecd3ae094b4895547f330f5fd64845cb0fb9ccb4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.234604 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(176335bd8c350bde1afbe3ecd3ae094b4895547f330f5fd64845cb0fb9ccb4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.234626 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(176335bd8c350bde1afbe3ecd3ae094b4895547f330f5fd64845cb0fb9ccb4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.234670 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(176335bd8c350bde1afbe3ecd3ae094b4895547f330f5fd64845cb0fb9ccb4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" podUID="e257eada-747c-4c16-ade0-64120ce08e5b" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.236101 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9f2\" (UniqueName: \"kubernetes.io/projected/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-kube-api-access-nw9f2\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.275503 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(c4e2a98429a4df784d4991abeb98e2d0167df5c5196ad7b5464718cf13d5ec5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.275621 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(c4e2a98429a4df784d4991abeb98e2d0167df5c5196ad7b5464718cf13d5ec5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.275649 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(c4e2a98429a4df784d4991abeb98e2d0167df5c5196ad7b5464718cf13d5ec5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.275725 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(c4e2a98429a4df784d4991abeb98e2d0167df5c5196ad7b5464718cf13d5ec5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" podUID="3d337f75-bb26-461d-9519-f17c333cfc55" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.292746 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.313028 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(0b315c6e3e4d3da9b0dcb8122f0e682be850db2b60815210079a8d5c59180f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.313083 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(0b315c6e3e4d3da9b0dcb8122f0e682be850db2b60815210079a8d5c59180f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.313105 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(0b315c6e3e4d3da9b0dcb8122f0e682be850db2b60815210079a8d5c59180f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.313148 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(0b315c6e3e4d3da9b0dcb8122f0e682be850db2b60815210079a8d5c59180f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.481295 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.510251 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5adb7f999566b2f506b685e91e3395da380bb13ad75d368a4481b82ecc1a27ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.510631 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5adb7f999566b2f506b685e91e3395da380bb13ad75d368a4481b82ecc1a27ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.510686 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5adb7f999566b2f506b685e91e3395da380bb13ad75d368a4481b82ecc1a27ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.510731 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5adb7f999566b2f506b685e91e3395da380bb13ad75d368a4481b82ecc1a27ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.724909 4739 generic.go:334] "Generic (PLEG): container finished" podID="7e037260-564c-4a0e-bfd4-f5452ccd7e5b" containerID="63139a00520ccb495ae7aeb05b4ec94cbc4f0702737ff09ce59721f657efee35" exitCode=0 Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.725118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerDied","Data":"63139a00520ccb495ae7aeb05b4ec94cbc4f0702737ff09ce59721f657efee35"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.419058 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" path="/var/lib/kubelet/pods/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/volumes" Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736693 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"65d2fab975fca85e33a1bd10769b030be3b635df185632f3c2c951c0583f2071"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736756 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"5573dd0cbf8bff48419d39b0da563d531642df77e89e7eb6890ad393d1e1f695"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736773 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"85e1b3d352c8763e335723cf2f2fb986e3c5cbaee36472135cad3b3ef5a339f8"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736785 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"bc3d244581c25b68aa9399475fd20be97da0ca767ceeb76714d0a9d6aaf6bff4"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736796 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"2a70e718a39cb88b5ae82e8d4003a1f04906d0c1143ac825bcec0ef96dbf1451"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"2c3fb374c31063e49b3fb92b705754f567c66974927fe56f22d53f9bf399f656"} Feb 18 14:09:24 crc kubenswrapper[4739]: I0218 14:09:24.753508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"feabe2a78254db093534d4eae996c0b083567faaf789ca5e4af8127006774819"} Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.744634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-lpf5k"] Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.745358 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.745934 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.749959 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h"] Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.750103 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.750730 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.770688 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"] Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.770818 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.771328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.778851 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"] Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.778973 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.779418 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.788345 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mqkqw"] Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.788499 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.788943 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.790436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"c1fc48d05342165ee0f4db047aae45eb1984ae0609a6eb9066c46db384e7972d"} Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.791741 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.791778 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.791854 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.791935 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(638dca07e566adf2a525ec36fe83eca6e7d2f2e6bccaafdc0842f4000e9ed730): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.791972 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(638dca07e566adf2a525ec36fe83eca6e7d2f2e6bccaafdc0842f4000e9ed730): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.791997 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(638dca07e566adf2a525ec36fe83eca6e7d2f2e6bccaafdc0842f4000e9ed730): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.792040 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(638dca07e566adf2a525ec36fe83eca6e7d2f2e6bccaafdc0842f4000e9ed730): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" podUID="e257eada-747c-4c16-ade0-64120ce08e5b" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.835917 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(9be6e32803400cca9685da5ba410475825de1006efa7c15a23565570f414617d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.836239 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(9be6e32803400cca9685da5ba410475825de1006efa7c15a23565570f414617d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.836262 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(9be6e32803400cca9685da5ba410475825de1006efa7c15a23565570f414617d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.836301 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(9be6e32803400cca9685da5ba410475825de1006efa7c15a23565570f414617d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.837118 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" podStartSLOduration=7.837100194 podStartE2EDuration="7.837100194s" podCreationTimestamp="2026-02-18 14:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:09:27.83615779 +0000 UTC m=+600.331878732" watchObservedRunningTime="2026-02-18 14:09:27.837100194 +0000 UTC m=+600.332821106" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.846739 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.852995 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.859084 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(cc29f6f5df374cf2db83f5207506adb9788796ba11fe9c5c5c352f5e1850f8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.859138 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(cc29f6f5df374cf2db83f5207506adb9788796ba11fe9c5c5c352f5e1850f8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.859178 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(cc29f6f5df374cf2db83f5207506adb9788796ba11fe9c5c5c352f5e1850f8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.859224 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(cc29f6f5df374cf2db83f5207506adb9788796ba11fe9c5c5c352f5e1850f8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" podUID="3d337f75-bb26-461d-9519-f17c333cfc55" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.867643 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(0731d138580baef97d7bbb5ee9dbb974f549f8b461b3e12c7a4193988427e302): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.867723 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(0731d138580baef97d7bbb5ee9dbb974f549f8b461b3e12c7a4193988427e302): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.867752 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(0731d138580baef97d7bbb5ee9dbb974f549f8b461b3e12c7a4193988427e302): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.867800 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(0731d138580baef97d7bbb5ee9dbb974f549f8b461b3e12c7a4193988427e302): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" podUID="ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.881625 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(9dcccb5051222fe47da8d71d7fa5560cbce1f133bf61ddfe24643cddaed03722): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.881685 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(9dcccb5051222fe47da8d71d7fa5560cbce1f133bf61ddfe24643cddaed03722): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.881707 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(9dcccb5051222fe47da8d71d7fa5560cbce1f133bf61ddfe24643cddaed03722): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.881746 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(9dcccb5051222fe47da8d71d7fa5560cbce1f133bf61ddfe24643cddaed03722): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" Feb 18 14:09:28 crc kubenswrapper[4739]: I0218 14:09:28.661028 4739 scope.go:117] "RemoveContainer" containerID="4e07a94ec0847b4e99755ab2a06cb038c67fb9badd5a1660eeebdbdd132f59cc" Feb 18 14:09:28 crc kubenswrapper[4739]: I0218 14:09:28.684817 4739 scope.go:117] "RemoveContainer" containerID="b2a60f4fb9b49f347db21a50c2097f9a1a95de43e825543cb9badb0925f33d62" Feb 18 14:09:32 crc kubenswrapper[4739]: I0218 14:09:32.410946 4739 scope.go:117] "RemoveContainer" containerID="d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1" Feb 18 14:09:32 crc kubenswrapper[4739]: E0218 14:09:32.411406 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-h9slg_openshift-multus(ec8fd6de-f77b-48a7-848f-a1b94e866365)\"" pod="openshift-multus/multus-h9slg" podUID="ec8fd6de-f77b-48a7-848f-a1b94e866365" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.410207 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.410264 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.410280 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.410784 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.411252 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.411538 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.471515 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(47d994e566875d21e446e410dfb659ff06f8970898e8adf5f64f15b9c437cc20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.472396 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(47d994e566875d21e446e410dfb659ff06f8970898e8adf5f64f15b9c437cc20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.472609 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(47d994e566875d21e446e410dfb659ff06f8970898e8adf5f64f15b9c437cc20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.472803 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(47d994e566875d21e446e410dfb659ff06f8970898e8adf5f64f15b9c437cc20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.480667 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5d2adc2a3222dac4109e62701e575bc0fef1eec021914bca503b01a302a5d294): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.480728 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5d2adc2a3222dac4109e62701e575bc0fef1eec021914bca503b01a302a5d294): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.480750 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5d2adc2a3222dac4109e62701e575bc0fef1eec021914bca503b01a302a5d294): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.480793 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5d2adc2a3222dac4109e62701e575bc0fef1eec021914bca503b01a302a5d294): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.486243 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(918d5611dd8e72c8323e68ec7ad3841de484eed9514cb6e426c9f562ef95d118): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.486306 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(918d5611dd8e72c8323e68ec7ad3841de484eed9514cb6e426c9f562ef95d118): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h"
Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.486330 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(918d5611dd8e72c8323e68ec7ad3841de484eed9514cb6e426c9f562ef95d118): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h"
Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.486372 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(918d5611dd8e72c8323e68ec7ad3841de484eed9514cb6e426c9f562ef95d118): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" podUID="e257eada-747c-4c16-ade0-64120ce08e5b"
Feb 18 14:09:42 crc kubenswrapper[4739]: I0218 14:09:42.409680 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"
Feb 18 14:09:42 crc kubenswrapper[4739]: I0218 14:09:42.410011 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"
Feb 18 14:09:42 crc kubenswrapper[4739]: I0218 14:09:42.410492 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"
Feb 18 14:09:42 crc kubenswrapper[4739]: I0218 14:09:42.410978 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"
Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.462753 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(7981ff9b0d20958f7e97511b52917feff04b6d482461b771fce93ca6d0444954): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.462837 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(7981ff9b0d20958f7e97511b52917feff04b6d482461b771fce93ca6d0444954): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"
Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.462863 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(7981ff9b0d20958f7e97511b52917feff04b6d482461b771fce93ca6d0444954): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"
Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.462928 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(7981ff9b0d20958f7e97511b52917feff04b6d482461b771fce93ca6d0444954): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" podUID="ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc"
Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.470844 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(a52a0f11b151fd2fb523fc2cbc8c104a91f9244b893ec4dd1ec1f4a3ea5501cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.470943 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(a52a0f11b151fd2fb523fc2cbc8c104a91f9244b893ec4dd1ec1f4a3ea5501cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"
Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.470984 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(a52a0f11b151fd2fb523fc2cbc8c104a91f9244b893ec4dd1ec1f4a3ea5501cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"
Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.471070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(a52a0f11b151fd2fb523fc2cbc8c104a91f9244b893ec4dd1ec1f4a3ea5501cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" podUID="3d337f75-bb26-461d-9519-f17c333cfc55"
Feb 18 14:09:43 crc kubenswrapper[4739]: I0218 14:09:43.410106 4739 scope.go:117] "RemoveContainer" containerID="d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1"
Feb 18 14:09:43 crc kubenswrapper[4739]: I0218 14:09:43.875543 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/2.log"
Feb 18 14:09:43 crc kubenswrapper[4739]: I0218 14:09:43.875842 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerStarted","Data":"3624dca3884a0e7f68dae865e9e5bdd570950f415bd75d4d1b9e008103284e71"}
Feb 18 14:09:50 crc kubenswrapper[4739]: I0218 14:09:50.618133 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-njz85"
Feb 18 14:09:52 crc kubenswrapper[4739]: I0218 14:09:52.410421 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw"
Feb 18 14:09:52 crc kubenswrapper[4739]: I0218 14:09:52.411483 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw"
Feb 18 14:09:52 crc kubenswrapper[4739]: I0218 14:09:52.673235 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mqkqw"]
Feb 18 14:09:52 crc kubenswrapper[4739]: W0218 14:09:52.696688 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0348c042_11c0_4a27_a8d4_04beea8e11a3.slice/crio-d70eb6530267e3e32e9164c834f56a5baa48338aa801f030c172de20dadd064b WatchSource:0}: Error finding container d70eb6530267e3e32e9164c834f56a5baa48338aa801f030c172de20dadd064b: Status 404 returned error can't find the container with id d70eb6530267e3e32e9164c834f56a5baa48338aa801f030c172de20dadd064b
Feb 18 14:09:52 crc kubenswrapper[4739]: I0218 14:09:52.930931 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" event={"ID":"0348c042-11c0-4a27-a8d4-04beea8e11a3","Type":"ContainerStarted","Data":"d70eb6530267e3e32e9164c834f56a5baa48338aa801f030c172de20dadd064b"}
Feb 18 14:09:54 crc kubenswrapper[4739]: I0218 14:09:54.412065 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"
Feb 18 14:09:54 crc kubenswrapper[4739]: I0218 14:09:54.413078 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"
Feb 18 14:09:54 crc kubenswrapper[4739]: I0218 14:09:54.616021 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"]
Feb 18 14:09:54 crc kubenswrapper[4739]: I0218 14:09:54.943806 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" event={"ID":"3d337f75-bb26-461d-9519-f17c333cfc55","Type":"ContainerStarted","Data":"1257cfb8743d893ac8a100f8f8ecec53b7388a050916c37a5de6b793fb6d0158"}
Feb 18 14:09:55 crc kubenswrapper[4739]: I0218 14:09:55.410127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h"
Feb 18 14:09:55 crc kubenswrapper[4739]: I0218 14:09:55.410661 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h"
Feb 18 14:09:56 crc kubenswrapper[4739]: I0218 14:09:56.409991 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k"
Feb 18 14:09:56 crc kubenswrapper[4739]: I0218 14:09:56.410210 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"
Feb 18 14:09:56 crc kubenswrapper[4739]: I0218 14:09:56.410985 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"
Feb 18 14:09:56 crc kubenswrapper[4739]: I0218 14:09:56.411175 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k"
Feb 18 14:09:56 crc kubenswrapper[4739]: I0218 14:09:56.705961 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h"]
Feb 18 14:09:59 crc kubenswrapper[4739]: W0218 14:09:59.038852 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode257eada_747c_4c16_ade0_64120ce08e5b.slice/crio-2311bc94fbb15d64e07451f005a4c474ddfbe166c23ab8c9166e94f29bed2d2b WatchSource:0}: Error finding container 2311bc94fbb15d64e07451f005a4c474ddfbe166c23ab8c9166e94f29bed2d2b: Status 404 returned error can't find the container with id 2311bc94fbb15d64e07451f005a4c474ddfbe166c23ab8c9166e94f29bed2d2b
Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.526328 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-lpf5k"]
Feb 18 14:09:59 crc kubenswrapper[4739]: W0218 14:09:59.538126 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a79887e_1b6d_44ed_b3e1_f1c7c65b48fe.slice/crio-22b40a1cdab48f8c31449d12a6c0db4cc6afb040e9225381374ddb502c35d8bc WatchSource:0}: Error finding container 22b40a1cdab48f8c31449d12a6c0db4cc6afb040e9225381374ddb502c35d8bc: Status 404 returned error can't find the container with id 22b40a1cdab48f8c31449d12a6c0db4cc6afb040e9225381374ddb502c35d8bc
Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.765528 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"]
Feb 18 14:09:59 crc kubenswrapper[4739]: W0218 14:09:59.771805 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef4587aa_49cd_4fd3_a5e6_05b0b5139cbc.slice/crio-b5b877966b50500010c195706836080e516d31e5e0a98ebc14b73d5dcfcbc2dd WatchSource:0}: Error finding container b5b877966b50500010c195706836080e516d31e5e0a98ebc14b73d5dcfcbc2dd: Status 404 returned error can't find the container with id b5b877966b50500010c195706836080e516d31e5e0a98ebc14b73d5dcfcbc2dd
Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.981533 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" event={"ID":"ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc","Type":"ContainerStarted","Data":"b5b877966b50500010c195706836080e516d31e5e0a98ebc14b73d5dcfcbc2dd"}
Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.983988 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" event={"ID":"3d337f75-bb26-461d-9519-f17c333cfc55","Type":"ContainerStarted","Data":"867a1d1e35e96f2de0846410e776a8707b5f70b60e12991ebf4a39c25a659674"}
Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.985652 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" event={"ID":"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe","Type":"ContainerStarted","Data":"22b40a1cdab48f8c31449d12a6c0db4cc6afb040e9225381374ddb502c35d8bc"}
Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.987779 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" event={"ID":"e257eada-747c-4c16-ade0-64120ce08e5b","Type":"ContainerStarted","Data":"1976a300df6104a78e1c3fc23c067d495200b7e4dda5fded82016791e4d53d0a"}
Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.987806 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" event={"ID":"e257eada-747c-4c16-ade0-64120ce08e5b","Type":"ContainerStarted","Data":"2311bc94fbb15d64e07451f005a4c474ddfbe166c23ab8c9166e94f29bed2d2b"}
Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.989145 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" event={"ID":"0348c042-11c0-4a27-a8d4-04beea8e11a3","Type":"ContainerStarted","Data":"f4ddca9038d3bd4756dcc8087b9a9bb925c7b018b9bc46301518d2782cc7fee9"}
Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.989414 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw"
Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.992364 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw"
Feb 18 14:10:00 crc kubenswrapper[4739]: I0218 14:10:00.002818 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" podStartSLOduration=35.355427238 podStartE2EDuration="40.002796284s" podCreationTimestamp="2026-02-18 14:09:20 +0000 UTC" firstStartedPulling="2026-02-18 14:09:54.626910742 +0000 UTC m=+627.122631664" lastFinishedPulling="2026-02-18 14:09:59.274279778 +0000 UTC m=+631.770000710" observedRunningTime="2026-02-18 14:10:00.000118556 +0000 UTC m=+632.495839488" watchObservedRunningTime="2026-02-18 14:10:00.002796284 +0000 UTC m=+632.498517206"
Feb 18 14:10:00 crc kubenswrapper[4739]: I0218 14:10:00.034618 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podStartSLOduration=33.455022331 podStartE2EDuration="40.03457471s" podCreationTimestamp="2026-02-18 14:09:20 +0000 UTC" firstStartedPulling="2026-02-18 14:09:52.698575286 +0000 UTC m=+625.194296218" lastFinishedPulling="2026-02-18 14:09:59.278127675 +0000 UTC m=+631.773848597" observedRunningTime="2026-02-18 14:10:00.024048916 +0000 UTC m=+632.519769828" watchObservedRunningTime="2026-02-18 14:10:00.03457471 +0000 UTC m=+632.530295642"
Feb 18 14:10:00 crc kubenswrapper[4739]: I0218 14:10:00.094616 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" podStartSLOduration=39.329611275 podStartE2EDuration="40.094596435s" podCreationTimestamp="2026-02-18 14:09:20 +0000 UTC" firstStartedPulling="2026-02-18 14:09:59.046583039 +0000 UTC m=+631.542304001" lastFinishedPulling="2026-02-18 14:09:59.811568239 +0000 UTC m=+632.307289161" observedRunningTime="2026-02-18 14:10:00.084944433 +0000 UTC m=+632.580665375" watchObservedRunningTime="2026-02-18 14:10:00.094596435 +0000 UTC m=+632.590317367"
Feb 18 14:10:03 crc kubenswrapper[4739]: I0218 14:10:03.028037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" event={"ID":"ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc","Type":"ContainerStarted","Data":"ffbf001d0c53a44567dce50cda8fd6397bcd2dc12b09ba9b03b313a22e2ec453"}
Feb 18 14:10:03 crc kubenswrapper[4739]: I0218 14:10:03.031967 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" event={"ID":"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe","Type":"ContainerStarted","Data":"5937df856d1d46847539665e65a3d6d8ab68c8c20f66dc465922025398c42662"}
Feb 18 14:10:03 crc kubenswrapper[4739]: I0218 14:10:03.032158 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k"
Feb 18 14:10:03 crc kubenswrapper[4739]: I0218 14:10:03.061619 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" podStartSLOduration=40.640615494 podStartE2EDuration="43.061560571s" podCreationTimestamp="2026-02-18 14:09:20 +0000 UTC" firstStartedPulling="2026-02-18 14:09:59.774675664 +0000 UTC m=+632.270396586" lastFinishedPulling="2026-02-18 14:10:02.195620741 +0000 UTC m=+634.691341663" observedRunningTime="2026-02-18 14:10:03.053730905 +0000 UTC m=+635.549451847" watchObservedRunningTime="2026-02-18 14:10:03.061560571 +0000 UTC m=+635.557281493"
Feb 18 14:10:03 crc kubenswrapper[4739]: I0218 14:10:03.091121 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podStartSLOduration=39.437711008 podStartE2EDuration="42.091099642s" podCreationTimestamp="2026-02-18 14:09:21 +0000 UTC" firstStartedPulling="2026-02-18 14:09:59.540346859 +0000 UTC m=+632.036067781" lastFinishedPulling="2026-02-18 14:10:02.193735493 +0000 UTC m=+634.689456415" observedRunningTime="2026-02-18 14:10:03.087767028 +0000 UTC m=+635.583487960" watchObservedRunningTime="2026-02-18 14:10:03.091099642 +0000 UTC m=+635.586820584"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.211426 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"]
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.212523 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.221517 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.224436 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"]
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.229115 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.229301 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4m87c"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.256595 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-927qr"]
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.257633 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.259841 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jt56x"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.264696 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-bfgbz"]
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.265696 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bfgbz"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.267219 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-wsx9r"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.270581 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bfgbz"]
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.282537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-927qr"]
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.317678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9tc\" (UniqueName: \"kubernetes.io/projected/09228bff-e02a-4a38-86ab-3d18492c3fa1-kube-api-access-sm9tc\") pod \"cert-manager-cainjector-cf98fcc89-xl5rj\" (UID: \"09228bff-e02a-4a38-86ab-3d18492c3fa1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.317740 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rfrq\" (UniqueName: \"kubernetes.io/projected/c9731232-5945-414d-bf7c-cd9207130675-kube-api-access-8rfrq\") pod \"cert-manager-webhook-687f57d79b-927qr\" (UID: \"c9731232-5945-414d-bf7c-cd9207130675\") " pod="cert-manager/cert-manager-webhook-687f57d79b-927qr"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.421156 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9tc\" (UniqueName: \"kubernetes.io/projected/09228bff-e02a-4a38-86ab-3d18492c3fa1-kube-api-access-sm9tc\") pod \"cert-manager-cainjector-cf98fcc89-xl5rj\" (UID: \"09228bff-e02a-4a38-86ab-3d18492c3fa1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.421216 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rfrq\" (UniqueName: \"kubernetes.io/projected/c9731232-5945-414d-bf7c-cd9207130675-kube-api-access-8rfrq\") pod \"cert-manager-webhook-687f57d79b-927qr\" (UID: \"c9731232-5945-414d-bf7c-cd9207130675\") " pod="cert-manager/cert-manager-webhook-687f57d79b-927qr"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.421297 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhf4\" (UniqueName: \"kubernetes.io/projected/4a1588a0-096b-4e77-b251-f034a57c7a04-kube-api-access-9mhf4\") pod \"cert-manager-858654f9db-bfgbz\" (UID: \"4a1588a0-096b-4e77-b251-f034a57c7a04\") " pod="cert-manager/cert-manager-858654f9db-bfgbz"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.441859 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9tc\" (UniqueName: \"kubernetes.io/projected/09228bff-e02a-4a38-86ab-3d18492c3fa1-kube-api-access-sm9tc\") pod \"cert-manager-cainjector-cf98fcc89-xl5rj\" (UID: \"09228bff-e02a-4a38-86ab-3d18492c3fa1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.442799 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfrq\" (UniqueName: \"kubernetes.io/projected/c9731232-5945-414d-bf7c-cd9207130675-kube-api-access-8rfrq\") pod \"cert-manager-webhook-687f57d79b-927qr\" (UID: \"c9731232-5945-414d-bf7c-cd9207130675\") " pod="cert-manager/cert-manager-webhook-687f57d79b-927qr"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.522764 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhf4\" (UniqueName: \"kubernetes.io/projected/4a1588a0-096b-4e77-b251-f034a57c7a04-kube-api-access-9mhf4\") pod \"cert-manager-858654f9db-bfgbz\" (UID: \"4a1588a0-096b-4e77-b251-f034a57c7a04\") " pod="cert-manager/cert-manager-858654f9db-bfgbz"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.533383 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.540247 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhf4\" (UniqueName: \"kubernetes.io/projected/4a1588a0-096b-4e77-b251-f034a57c7a04-kube-api-access-9mhf4\") pod \"cert-manager-858654f9db-bfgbz\" (UID: \"4a1588a0-096b-4e77-b251-f034a57c7a04\") " pod="cert-manager/cert-manager-858654f9db-bfgbz"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.578618 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.585131 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bfgbz"
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.854193 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-927qr"]
Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.966844 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bfgbz"]
Feb 18 14:10:07 crc kubenswrapper[4739]: I0218 14:10:07.012974 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"]
Feb 18 14:10:07 crc kubenswrapper[4739]: I0218 14:10:07.055326 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" event={"ID":"c9731232-5945-414d-bf7c-cd9207130675","Type":"ContainerStarted","Data":"e8ec9501c5e7763f5c3f27ab80dac6d138f48b683f62abbca0c8100d78544cbd"}
Feb 18 14:10:07 crc kubenswrapper[4739]: I0218 14:10:07.056524 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" event={"ID":"09228bff-e02a-4a38-86ab-3d18492c3fa1","Type":"ContainerStarted","Data":"6fb5ad80aa567c9077b9b91bc5fe45863465870ed7866c56608c71c2238f40b3"}
Feb 18 14:10:07 crc kubenswrapper[4739]: I0218 14:10:07.058303 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bfgbz" event={"ID":"4a1588a0-096b-4e77-b251-f034a57c7a04","Type":"ContainerStarted","Data":"8a047c5d92b1fe401904b44746047144260d54c4c478b996c15d16a3109f6001"}
Feb 18 14:10:11 crc kubenswrapper[4739]: I0218 14:10:11.485993 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k"
Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.121314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" event={"ID":"09228bff-e02a-4a38-86ab-3d18492c3fa1","Type":"ContainerStarted","Data":"c2275996e9f713a6c4de0d6ebd364787512e90c3d51c849d0ba8ffc2f4983898"}
Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.125576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bfgbz" event={"ID":"4a1588a0-096b-4e77-b251-f034a57c7a04","Type":"ContainerStarted","Data":"3d3c9533ad06560c7aaea5d94681fc805ee8303be163b909088c5ebdafba4680"}
Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.134314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" event={"ID":"c9731232-5945-414d-bf7c-cd9207130675","Type":"ContainerStarted","Data":"16180aad5ff17f9442ca809b4bcdcc1d9cfba2a73e4951b86d5a99f948a79c0f"}
Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.134683 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr"
Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.138909 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" podStartSLOduration=1.809481468 podStartE2EDuration="6.138891892s" podCreationTimestamp="2026-02-18 14:10:06 +0000 UTC" firstStartedPulling="2026-02-18 14:10:07.021623996 +0000 UTC m=+639.517344918" lastFinishedPulling="2026-02-18 14:10:11.35103438 +0000 UTC m=+643.846755342" observedRunningTime="2026-02-18 14:10:12.135929718 +0000 UTC m=+644.631650640" watchObservedRunningTime="2026-02-18 14:10:12.138891892 +0000 UTC m=+644.634612814"
Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.155350 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podStartSLOduration=1.632294804 podStartE2EDuration="6.155327654s" podCreationTimestamp="2026-02-18 14:10:06 +0000 UTC" firstStartedPulling="2026-02-18 14:10:06.851183222 +0000 UTC m=+639.346904144" lastFinishedPulling="2026-02-18 14:10:11.374216072 +0000 UTC m=+643.869936994" observedRunningTime="2026-02-18 14:10:12.15114036 +0000 UTC m=+644.646861282" watchObservedRunningTime="2026-02-18 14:10:12.155327654 +0000 UTC m=+644.651048576"
Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.177572 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-bfgbz" podStartSLOduration=1.791082047 podStartE2EDuration="6.177547912s" podCreationTimestamp="2026-02-18 14:10:06 +0000 UTC" firstStartedPulling="2026-02-18 14:10:06.973890829 +0000 UTC m=+639.469611751" lastFinishedPulling="2026-02-18 14:10:11.360356694 +0000 UTC m=+643.856077616" observedRunningTime="2026-02-18 14:10:12.172223168 +0000 UTC m=+644.667944110" watchObservedRunningTime="2026-02-18 14:10:12.177547912 +0000 UTC m=+644.673268844"
Feb 18 14:10:16 crc kubenswrapper[4739]: I0218 14:10:16.582026 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.460359 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"]
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.462344 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.466271 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.516008 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"]
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.532402 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.532631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pldlz\" (UniqueName: \"kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.532852 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.634572 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.634632 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.634715 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pldlz\" (UniqueName: \"kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.635580 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.635601 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.659266 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d"]
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.660640 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.673658 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pldlz\" (UniqueName: \"kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.675913 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d"]
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.735353 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.735498 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.735549 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d"
Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.779246 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.836217 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.836313 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.836407 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.836915 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: 
I0218 14:10:38.837043 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.854750 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.976387 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"] Feb 18 14:10:39 crc kubenswrapper[4739]: I0218 14:10:39.021897 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:39 crc kubenswrapper[4739]: I0218 14:10:39.316054 4739 generic.go:334] "Generic (PLEG): container finished" podID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerID="454dd61bdd62d47407b56f447566ea9f22fe341c25dc7ed14dcd3d120b9b8069" exitCode=0 Feb 18 14:10:39 crc kubenswrapper[4739]: I0218 14:10:39.316093 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" event={"ID":"4fece5bf-a118-4158-9879-3b4ca9e751af","Type":"ContainerDied","Data":"454dd61bdd62d47407b56f447566ea9f22fe341c25dc7ed14dcd3d120b9b8069"} Feb 18 14:10:39 crc kubenswrapper[4739]: I0218 14:10:39.316113 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" event={"ID":"4fece5bf-a118-4158-9879-3b4ca9e751af","Type":"ContainerStarted","Data":"7a9b013ed906a0613589ae3d66b4884c8d5e7a76e9c7fed840daceab37832d7b"} Feb 18 14:10:39 crc kubenswrapper[4739]: I0218 14:10:39.340812 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d"] Feb 18 14:10:39 crc kubenswrapper[4739]: W0218 14:10:39.350487 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod517d6503_525a_420f_b4e7_1732df952bd4.slice/crio-cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4 WatchSource:0}: Error finding container cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4: Status 404 returned error can't find the container with id cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4 Feb 18 14:10:40 crc kubenswrapper[4739]: I0218 14:10:40.327307 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="517d6503-525a-420f-b4e7-1732df952bd4" containerID="92ccbd04e73a399a1b2acade5f0fc2fe3436deea52b89df59639b4cccf3974e0" exitCode=0 Feb 18 14:10:40 crc kubenswrapper[4739]: I0218 14:10:40.327473 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" event={"ID":"517d6503-525a-420f-b4e7-1732df952bd4","Type":"ContainerDied","Data":"92ccbd04e73a399a1b2acade5f0fc2fe3436deea52b89df59639b4cccf3974e0"} Feb 18 14:10:40 crc kubenswrapper[4739]: I0218 14:10:40.327849 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" event={"ID":"517d6503-525a-420f-b4e7-1732df952bd4","Type":"ContainerStarted","Data":"cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4"} Feb 18 14:10:41 crc kubenswrapper[4739]: I0218 14:10:41.335787 4739 generic.go:334] "Generic (PLEG): container finished" podID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerID="f6ee98b5c21b1150f8da5d85e1aaf52dc0fcb1a34dee8bd3ae7600a84cb97958" exitCode=0 Feb 18 14:10:41 crc kubenswrapper[4739]: I0218 14:10:41.336119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" event={"ID":"4fece5bf-a118-4158-9879-3b4ca9e751af","Type":"ContainerDied","Data":"f6ee98b5c21b1150f8da5d85e1aaf52dc0fcb1a34dee8bd3ae7600a84cb97958"} Feb 18 14:10:42 crc kubenswrapper[4739]: I0218 14:10:42.346863 4739 generic.go:334] "Generic (PLEG): container finished" podID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerID="33f9eaf41663c6cdc1fa6c161746dc2c97457c8e8624b5d58df79594ba4e8321" exitCode=0 Feb 18 14:10:42 crc kubenswrapper[4739]: I0218 14:10:42.347013 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" 
event={"ID":"4fece5bf-a118-4158-9879-3b4ca9e751af","Type":"ContainerDied","Data":"33f9eaf41663c6cdc1fa6c161746dc2c97457c8e8624b5d58df79594ba4e8321"} Feb 18 14:10:42 crc kubenswrapper[4739]: I0218 14:10:42.349022 4739 generic.go:334] "Generic (PLEG): container finished" podID="517d6503-525a-420f-b4e7-1732df952bd4" containerID="35bd051d69fe2a278c91886fad39204d35c0233eac46781159cd57033adb0c4b" exitCode=0 Feb 18 14:10:42 crc kubenswrapper[4739]: I0218 14:10:42.349079 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" event={"ID":"517d6503-525a-420f-b4e7-1732df952bd4","Type":"ContainerDied","Data":"35bd051d69fe2a278c91886fad39204d35c0233eac46781159cd57033adb0c4b"} Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.361093 4739 generic.go:334] "Generic (PLEG): container finished" podID="517d6503-525a-420f-b4e7-1732df952bd4" containerID="0c9676f5b9f1ebc76195364908897f7a73a2564143e65de0de125703c8cdc208" exitCode=0 Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.361195 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" event={"ID":"517d6503-525a-420f-b4e7-1732df952bd4","Type":"ContainerDied","Data":"0c9676f5b9f1ebc76195364908897f7a73a2564143e65de0de125703c8cdc208"} Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.584126 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.706932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle\") pod \"4fece5bf-a118-4158-9879-3b4ca9e751af\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.707010 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pldlz\" (UniqueName: \"kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz\") pod \"4fece5bf-a118-4158-9879-3b4ca9e751af\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.707055 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util\") pod \"4fece5bf-a118-4158-9879-3b4ca9e751af\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.707981 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle" (OuterVolumeSpecName: "bundle") pod "4fece5bf-a118-4158-9879-3b4ca9e751af" (UID: "4fece5bf-a118-4158-9879-3b4ca9e751af"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.716684 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz" (OuterVolumeSpecName: "kube-api-access-pldlz") pod "4fece5bf-a118-4158-9879-3b4ca9e751af" (UID: "4fece5bf-a118-4158-9879-3b4ca9e751af"). InnerVolumeSpecName "kube-api-access-pldlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.721360 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util" (OuterVolumeSpecName: "util") pod "4fece5bf-a118-4158-9879-3b4ca9e751af" (UID: "4fece5bf-a118-4158-9879-3b4ca9e751af"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.809214 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.809254 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pldlz\" (UniqueName: \"kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.809272 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.370516 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.370540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" event={"ID":"4fece5bf-a118-4158-9879-3b4ca9e751af","Type":"ContainerDied","Data":"7a9b013ed906a0613589ae3d66b4884c8d5e7a76e9c7fed840daceab37832d7b"} Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.370595 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a9b013ed906a0613589ae3d66b4884c8d5e7a76e9c7fed840daceab37832d7b" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.624783 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.825511 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs\") pod \"517d6503-525a-420f-b4e7-1732df952bd4\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.825594 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util\") pod \"517d6503-525a-420f-b4e7-1732df952bd4\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.825653 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle\") pod \"517d6503-525a-420f-b4e7-1732df952bd4\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " Feb 18 14:10:44 crc 
kubenswrapper[4739]: I0218 14:10:44.826970 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle" (OuterVolumeSpecName: "bundle") pod "517d6503-525a-420f-b4e7-1732df952bd4" (UID: "517d6503-525a-420f-b4e7-1732df952bd4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.834546 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs" (OuterVolumeSpecName: "kube-api-access-xfcqs") pod "517d6503-525a-420f-b4e7-1732df952bd4" (UID: "517d6503-525a-420f-b4e7-1732df952bd4"). InnerVolumeSpecName "kube-api-access-xfcqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.928038 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.928118 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:45 crc kubenswrapper[4739]: I0218 14:10:45.305294 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util" (OuterVolumeSpecName: "util") pod "517d6503-525a-420f-b4e7-1732df952bd4" (UID: "517d6503-525a-420f-b4e7-1732df952bd4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:10:45 crc kubenswrapper[4739]: I0218 14:10:45.332328 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:45 crc kubenswrapper[4739]: I0218 14:10:45.380283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" event={"ID":"517d6503-525a-420f-b4e7-1732df952bd4","Type":"ContainerDied","Data":"cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4"} Feb 18 14:10:45 crc kubenswrapper[4739]: I0218 14:10:45.380961 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4" Feb 18 14:10:45 crc kubenswrapper[4739]: I0218 14:10:45.380373 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.302862 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-54nln"] Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303747 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303764 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303785 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="util" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303794 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="util" Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303819 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="pull" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303828 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="pull" Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303841 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="util" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303848 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="util" Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303859 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="pull" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303866 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="pull" Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303880 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303888 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.304026 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.304051 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="extract" Feb 18 14:10:48 crc 
kubenswrapper[4739]: I0218 14:10:48.304644 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.308384 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-zst2v" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.309697 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.313353 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.323200 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-54nln"] Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.477436 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxwnb\" (UniqueName: \"kubernetes.io/projected/4b0da132-982d-47b8-ae8a-d0529fbfe6a4-kube-api-access-pxwnb\") pod \"cluster-logging-operator-c769fd969-54nln\" (UID: \"4b0da132-982d-47b8-ae8a-d0529fbfe6a4\") " pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.579265 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxwnb\" (UniqueName: \"kubernetes.io/projected/4b0da132-982d-47b8-ae8a-d0529fbfe6a4-kube-api-access-pxwnb\") pod \"cluster-logging-operator-c769fd969-54nln\" (UID: \"4b0da132-982d-47b8-ae8a-d0529fbfe6a4\") " pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.598738 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxwnb\" (UniqueName: 
\"kubernetes.io/projected/4b0da132-982d-47b8-ae8a-d0529fbfe6a4-kube-api-access-pxwnb\") pod \"cluster-logging-operator-c769fd969-54nln\" (UID: \"4b0da132-982d-47b8-ae8a-d0529fbfe6a4\") " pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.630585 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.840308 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-54nln"] Feb 18 14:10:49 crc kubenswrapper[4739]: I0218 14:10:49.402608 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" event={"ID":"4b0da132-982d-47b8-ae8a-d0529fbfe6a4","Type":"ContainerStarted","Data":"42175b195f358ba914182304ce6e0ebffb25d3923adf31dee1bd3f7a30ecb776"} Feb 18 14:10:55 crc kubenswrapper[4739]: I0218 14:10:55.466310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" event={"ID":"4b0da132-982d-47b8-ae8a-d0529fbfe6a4","Type":"ContainerStarted","Data":"bf4ea001ea1dd847baae03b4ae85e964ad985d7b1ab8c3f7b8c94526d33c5d60"} Feb 18 14:10:59 crc kubenswrapper[4739]: I0218 14:10:59.373335 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:10:59 crc kubenswrapper[4739]: I0218 14:10:59.373710 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.025331 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" podStartSLOduration=6.58989109 podStartE2EDuration="12.02530912s" podCreationTimestamp="2026-02-18 14:10:48 +0000 UTC" firstStartedPulling="2026-02-18 14:10:48.858362043 +0000 UTC m=+681.354082965" lastFinishedPulling="2026-02-18 14:10:54.293780073 +0000 UTC m=+686.789500995" observedRunningTime="2026-02-18 14:10:55.506698557 +0000 UTC m=+688.002419479" watchObservedRunningTime="2026-02-18 14:11:00.02530912 +0000 UTC m=+692.521030062" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.029377 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw"] Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.030720 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.033905 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.038954 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.039200 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.039212 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.039231 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.041782 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-7w974" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.048280 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw"] Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.155379 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-apiservice-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 
14:11:00.155485 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87brw\" (UniqueName: \"kubernetes.io/projected/4091e4df-be25-4e94-bf12-7079a8ce9b5f-kube-api-access-87brw\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.155514 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.155554 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4091e4df-be25-4e94-bf12-7079a8ce9b5f-manager-config\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.155590 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-webhook-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.256599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-apiservice-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.256682 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87brw\" (UniqueName: \"kubernetes.io/projected/4091e4df-be25-4e94-bf12-7079a8ce9b5f-kube-api-access-87brw\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.256718 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.256778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4091e4df-be25-4e94-bf12-7079a8ce9b5f-manager-config\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.256832 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-webhook-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: 
\"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.257913 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4091e4df-be25-4e94-bf12-7079a8ce9b5f-manager-config\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.265456 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-webhook-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.268978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.273113 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-apiservice-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.291312 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-87brw\" (UniqueName: \"kubernetes.io/projected/4091e4df-be25-4e94-bf12-7079a8ce9b5f-kube-api-access-87brw\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.349358 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.616615 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw"] Feb 18 14:11:00 crc kubenswrapper[4739]: W0218 14:11:00.633565 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4091e4df_be25_4e94_bf12_7079a8ce9b5f.slice/crio-f537dad184c3d4800f0eabfb3f0317ab642b6b80f1ee57369f01784ece1f01e4 WatchSource:0}: Error finding container f537dad184c3d4800f0eabfb3f0317ab642b6b80f1ee57369f01784ece1f01e4: Status 404 returned error can't find the container with id f537dad184c3d4800f0eabfb3f0317ab642b6b80f1ee57369f01784ece1f01e4 Feb 18 14:11:01 crc kubenswrapper[4739]: I0218 14:11:01.511831 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" event={"ID":"4091e4df-be25-4e94-bf12-7079a8ce9b5f","Type":"ContainerStarted","Data":"f537dad184c3d4800f0eabfb3f0317ab642b6b80f1ee57369f01784ece1f01e4"} Feb 18 14:11:03 crc kubenswrapper[4739]: I0218 14:11:03.528438 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" 
event={"ID":"4091e4df-be25-4e94-bf12-7079a8ce9b5f","Type":"ContainerStarted","Data":"668e5cf344ed8d06e64315007bd574671cf8c8e1f1fd333153fe7325adbbecad"} Feb 18 14:11:08 crc kubenswrapper[4739]: I0218 14:11:08.561493 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" event={"ID":"4091e4df-be25-4e94-bf12-7079a8ce9b5f","Type":"ContainerStarted","Data":"0f5a58e0edf17e924bc5e9579db08cf06cfce905915b2baf102218a6b7254d1c"} Feb 18 14:11:08 crc kubenswrapper[4739]: I0218 14:11:08.562085 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:08 crc kubenswrapper[4739]: I0218 14:11:08.563642 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:08 crc kubenswrapper[4739]: I0218 14:11:08.579979 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" podStartSLOduration=0.971276919 podStartE2EDuration="8.579959872s" podCreationTimestamp="2026-02-18 14:11:00 +0000 UTC" firstStartedPulling="2026-02-18 14:11:00.635878906 +0000 UTC m=+693.131599828" lastFinishedPulling="2026-02-18 14:11:08.244561859 +0000 UTC m=+700.740282781" observedRunningTime="2026-02-18 14:11:08.577951151 +0000 UTC m=+701.073672083" watchObservedRunningTime="2026-02-18 14:11:08.579959872 +0000 UTC m=+701.075680804" Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.946303 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.947798 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.950270 4739 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-4jt7w" Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.950303 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.951037 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.957131 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.048670 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.048773 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrrwc\" (UniqueName: \"kubernetes.io/projected/8b37d199-1cb8-410c-af45-c6a181f5a5fa-kube-api-access-xrrwc\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.149848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.149956 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xrrwc\" (UniqueName: \"kubernetes.io/projected/8b37d199-1cb8-410c-af45-c6a181f5a5fa-kube-api-access-xrrwc\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.153993 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.154039 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c1f6744619e96528fe550f20b0a6efc84d44207a81495198471d6a685eafc85c/globalmount\"" pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.169333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrrwc\" (UniqueName: \"kubernetes.io/projected/8b37d199-1cb8-410c-af45-c6a181f5a5fa-kube-api-access-xrrwc\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.185818 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.263888 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.761587 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 18 14:11:12 crc kubenswrapper[4739]: W0218 14:11:12.762588 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b37d199_1cb8_410c_af45_c6a181f5a5fa.slice/crio-13ebb1ad7f1c943b119e48f059d5afe49b508e142ad00be5e2e272a8d9c512f4 WatchSource:0}: Error finding container 13ebb1ad7f1c943b119e48f059d5afe49b508e142ad00be5e2e272a8d9c512f4: Status 404 returned error can't find the container with id 13ebb1ad7f1c943b119e48f059d5afe49b508e142ad00be5e2e272a8d9c512f4 Feb 18 14:11:13 crc kubenswrapper[4739]: I0218 14:11:13.594327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"8b37d199-1cb8-410c-af45-c6a181f5a5fa","Type":"ContainerStarted","Data":"13ebb1ad7f1c943b119e48f059d5afe49b508e142ad00be5e2e272a8d9c512f4"} Feb 18 14:11:16 crc kubenswrapper[4739]: I0218 14:11:16.618524 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"8b37d199-1cb8-410c-af45-c6a181f5a5fa","Type":"ContainerStarted","Data":"294dfc4ba6866c3948399e099df856aa7445e88fbe4a1b126bef321ebd56a7a7"} Feb 18 14:11:16 crc kubenswrapper[4739]: I0218 14:11:16.642104 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.888085358 podStartE2EDuration="7.642082118s" podCreationTimestamp="2026-02-18 14:11:09 +0000 UTC" firstStartedPulling="2026-02-18 14:11:12.764976197 +0000 UTC m=+705.260697119" lastFinishedPulling="2026-02-18 14:11:15.518972917 +0000 UTC m=+708.014693879" observedRunningTime="2026-02-18 14:11:16.636369655 +0000 UTC m=+709.132090587" watchObservedRunningTime="2026-02-18 14:11:16.642082118 +0000 UTC m=+709.137803050" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.829880 4739 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x"] Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.831100 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.833650 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-76dw2" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.835462 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.835742 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.835789 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.838914 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.874622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x"] Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.916878 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcgtv\" (UniqueName: \"kubernetes.io/projected/d2537052-1467-4892-afe4-cafbbdfbd645-kube-api-access-jcgtv\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.916950 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.917115 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.917183 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.917258 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-config\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.019649 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" 
(UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.019740 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.019814 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-config\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.019866 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcgtv\" (UniqueName: \"kubernetes.io/projected/d2537052-1467-4892-afe4-cafbbdfbd645-kube-api-access-jcgtv\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.020405 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.021006 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-config\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.021258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.025327 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.029103 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.058049 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcgtv\" (UniqueName: \"kubernetes.io/projected/d2537052-1467-4892-afe4-cafbbdfbd645-kube-api-access-jcgtv\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 
18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.099983 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.100831 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.106767 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.107073 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.107265 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.118190 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.148733 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224172 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhl7g\" (UniqueName: \"kubernetes.io/projected/3886312a-0449-43cc-b914-a4633b2c7e80-kube-api-access-jhl7g\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224223 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224252 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224300 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224321 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-config\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.254211 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.255027 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.264068 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.264660 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.269575 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326728 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326777 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326832 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326863 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-config\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326926 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9jf5\" (UniqueName: \"kubernetes.io/projected/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-kube-api-access-x9jf5\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326954 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-config\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326981 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.327016 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhl7g\" (UniqueName: \"kubernetes.io/projected/3886312a-0449-43cc-b914-a4633b2c7e80-kube-api-access-jhl7g\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.327055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.327086 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.328329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-config\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.328341 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.332665 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.333219 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.353043 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.368925 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhl7g\" (UniqueName: \"kubernetes.io/projected/3886312a-0449-43cc-b914-a4633b2c7e80-kube-api-access-jhl7g\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.404060 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.405490 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.407467 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-vkjm2" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.407685 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.407982 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.408083 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.413087 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.414013 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.414206 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.414393 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.429307 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.429344 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.429394 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9jf5\" (UniqueName: \"kubernetes.io/projected/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-kube-api-access-x9jf5\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.429415 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-config\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.429435 4739 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.430149 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.431727 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-config\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.432287 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.435506 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.440595 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.447616 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.459368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9jf5\" (UniqueName: \"kubernetes.io/projected/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-kube-api-access-x9jf5\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.473509 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532276 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532353 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532383 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532427 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: 
\"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkww7\" (UniqueName: \"kubernetes.io/projected/717b73b9-8190-41ce-8513-eb314a37cdfd-kube-api-access-tkww7\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532614 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532638 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tenants\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-rbac\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532810 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" 
(UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tenants\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532918 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532962 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-rbac\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.533000 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r57sw\" (UniqueName: \"kubernetes.io/projected/82d2d64c-4971-48ee-a75c-30adadf054de-kube-api-access-r57sw\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.533038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc 
kubenswrapper[4739]: I0218 14:11:22.533065 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.533155 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.595889 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.635026 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.635164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-rbac\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.635874 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r57sw\" (UniqueName: \"kubernetes.io/projected/82d2d64c-4971-48ee-a75c-30adadf054de-kube-api-access-r57sw\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.636275 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.636871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-rbac\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.637074 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.637166 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " 
pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.637984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638054 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638115 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638153 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638213 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638281 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638342 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkww7\" (UniqueName: \"kubernetes.io/projected/717b73b9-8190-41ce-8513-eb314a37cdfd-kube-api-access-tkww7\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638380 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638415 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tenants\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638488 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-rbac\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tenants\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.639435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.639717 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.640195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" 
Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.640532 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.640745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-rbac\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.641046 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.651204 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.651404 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: 
\"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.657243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.659956 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r57sw\" (UniqueName: \"kubernetes.io/projected/82d2d64c-4971-48ee-a75c-30adadf054de-kube-api-access-r57sw\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.659977 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tenants\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.660077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkww7\" (UniqueName: \"kubernetes.io/projected/717b73b9-8190-41ce-8513-eb314a37cdfd-kube-api-access-tkww7\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.660691 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.660994 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tenants\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.731379 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.739298 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.750920 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.849397 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.045126 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.047376 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.049983 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.050050 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.065471 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.092334 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx"] Feb 18 14:11:23 crc kubenswrapper[4739]: W0218 14:11:23.095647 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6ad99a5_d1e9_44a4_bf58_b2085ac14b4b.slice/crio-4726516971c8db75ca1326737a6cb1e5f9f0dd76e834195cd1f87ed4cc4c206a WatchSource:0}: Error finding container 4726516971c8db75ca1326737a6cb1e5f9f0dd76e834195cd1f87ed4cc4c206a: Status 404 returned error can't find the container with id 4726516971c8db75ca1326737a6cb1e5f9f0dd76e834195cd1f87ed4cc4c206a Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.146840 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-793619b8-d623-45aa-8547-e98e12f38d21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-793619b8-d623-45aa-8547-e98e12f38d21\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147201 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: 
\"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147247 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147330 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-config\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147406 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147432 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdjj\" (UniqueName: \"kubernetes.io/projected/bfabc0be-78aa-4cf2-ae16-6d226b95be03-kube-api-access-tvdjj\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147494 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.148432 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.244879 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.245976 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252597 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252667 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252747 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-config\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252781 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252834 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" 
(UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252872 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvdjj\" (UniqueName: \"kubernetes.io/projected/bfabc0be-78aa-4cf2-ae16-6d226b95be03-kube-api-access-tvdjj\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252913 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252944 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-793619b8-d623-45aa-8547-e98e12f38d21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-793619b8-d623-45aa-8547-e98e12f38d21\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.253934 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.253985 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-config\") pod \"logging-loki-ingester-0\" (UID: 
\"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.255970 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.256007 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0d7302d0c57022864d95ac85d3cb8f35f2dea7518adab428ee5cc729a54e0531/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.256154 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.256199 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-793619b8-d623-45aa-8547-e98e12f38d21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-793619b8-d623-45aa-8547-e98e12f38d21\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/feaa171f0b6cd5799412ead8e4699e27ef427e9053f38aab2316903ddf25c100/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.256474 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.257098 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.257708 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.264343 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.264996 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.269740 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.309311 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.310384 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvdjj\" (UniqueName: \"kubernetes.io/projected/bfabc0be-78aa-4cf2-ae16-6d226b95be03-kube-api-access-tvdjj\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.334710 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.336185 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-793619b8-d623-45aa-8547-e98e12f38d21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-793619b8-d623-45aa-8547-e98e12f38d21\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.340489 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.341369 4739 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.343344 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.343548 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.346427 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354360 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7688f8c1-6203-4159-b750-ced415be7cb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7688f8c1-6203-4159-b750-ced415be7cb7\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354394 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: 
I0218 14:11:23.354413 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354462 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vfv\" (UniqueName: \"kubernetes.io/projected/8cadd086-3e21-4dfc-9577-356fdcfe83c1-kube-api-access-g9vfv\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354710 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-config\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.434390 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.456486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.456575 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.456670 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngmb\" (UniqueName: \"kubernetes.io/projected/d13e1961-45de-4db2-a4cb-04c91c7b18ad-kube-api-access-bngmb\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.457542 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.457859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7688f8c1-6203-4159-b750-ced415be7cb7\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7688f8c1-6203-4159-b750-ced415be7cb7\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.458364 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.458401 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.458507 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vfv\" (UniqueName: \"kubernetes.io/projected/8cadd086-3e21-4dfc-9577-356fdcfe83c1-kube-api-access-g9vfv\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.458865 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-config\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.458970 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459010 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459037 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459145 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459839 
4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-config\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459976 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.460248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.462645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.463273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.466527 4739 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.466562 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7688f8c1-6203-4159-b750-ced415be7cb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7688f8c1-6203-4159-b750-ced415be7cb7\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4f570ac13e6df09a348188bf3c99db79ba6c613f72b2b42e103e60173cad3d99/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.476978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vfv\" (UniqueName: \"kubernetes.io/projected/8cadd086-3e21-4dfc-9577-356fdcfe83c1-kube-api-access-g9vfv\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.494708 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7688f8c1-6203-4159-b750-ced415be7cb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7688f8c1-6203-4159-b750-ced415be7cb7\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.561421 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.561488 
4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.561515 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.561545 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.563689 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bngmb\" (UniqueName: \"kubernetes.io/projected/d13e1961-45de-4db2-a4cb-04c91c7b18ad-kube-api-access-bngmb\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.563776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc 
kubenswrapper[4739]: I0218 14:11:23.563897 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.565954 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.569090 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.569724 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.571041 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.576517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.577107 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.577145 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/428cbf6e2a5f0931e5d37164ac9f8d8b697e2569180b7a3024f745d84b571d37/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.589802 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bngmb\" (UniqueName: \"kubernetes.io/projected/d13e1961-45de-4db2-a4cb-04c91c7b18ad-kube-api-access-bngmb\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.610605 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\") pod \"logging-loki-index-gateway-0\" (UID: 
\"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.634858 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.655803 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.669436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" event={"ID":"717b73b9-8190-41ce-8513-eb314a37cdfd","Type":"ContainerStarted","Data":"15a21824e8b86ea716e5809907ecdbbaf9bdcc39c94a5ebb2a9ed68ceaa32dce"} Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.671159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" event={"ID":"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b","Type":"ContainerStarted","Data":"4726516971c8db75ca1326737a6cb1e5f9f0dd76e834195cd1f87ed4cc4c206a"} Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.672804 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" event={"ID":"3886312a-0449-43cc-b914-a4633b2c7e80","Type":"ContainerStarted","Data":"3f8b07b6c419042850f5d2c44ac297c9341d063362c7ebec6608864417e05afe"} Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.674073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" event={"ID":"82d2d64c-4971-48ee-a75c-30adadf054de","Type":"ContainerStarted","Data":"ab421dbd157c4995ee2ace6842f59330eb01ddd1add15ca5ab520079d40c2d32"} Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.676165 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" 
event={"ID":"d2537052-1467-4892-afe4-cafbbdfbd645","Type":"ContainerStarted","Data":"3d57e17da6f6b63f4935ba3674cb6af753fcbd06b47234c7cf156e2844c22d0d"} Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.841475 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: W0218 14:11:23.855590 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfabc0be_78aa_4cf2_ae16_6d226b95be03.slice/crio-04744e574c40db5196b3499df64c518a9e54a0329e3ebd298a5afdee099222fa WatchSource:0}: Error finding container 04744e574c40db5196b3499df64c518a9e54a0329e3ebd298a5afdee099222fa: Status 404 returned error can't find the container with id 04744e574c40db5196b3499df64c518a9e54a0329e3ebd298a5afdee099222fa Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.876238 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: W0218 14:11:23.883400 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cadd086_3e21_4dfc_9577_356fdcfe83c1.slice/crio-ec9c53529d08c879b5bc8ff8111bd83f9f58d2b7634fa0d6318e65efacf17d02 WatchSource:0}: Error finding container ec9c53529d08c879b5bc8ff8111bd83f9f58d2b7634fa0d6318e65efacf17d02: Status 404 returned error can't find the container with id ec9c53529d08c879b5bc8ff8111bd83f9f58d2b7634fa0d6318e65efacf17d02 Feb 18 14:11:24 crc kubenswrapper[4739]: I0218 14:11:24.144055 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 18 14:11:24 crc kubenswrapper[4739]: W0218 14:11:24.147857 4739 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd13e1961_45de_4db2_a4cb_04c91c7b18ad.slice/crio-3d17c81b23ff01dee538d0fc06e19283ec9348614258e80cd5462e0fbdb7947c WatchSource:0}: Error finding container 3d17c81b23ff01dee538d0fc06e19283ec9348614258e80cd5462e0fbdb7947c: Status 404 returned error can't find the container with id 3d17c81b23ff01dee538d0fc06e19283ec9348614258e80cd5462e0fbdb7947c Feb 18 14:11:24 crc kubenswrapper[4739]: I0218 14:11:24.688074 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8cadd086-3e21-4dfc-9577-356fdcfe83c1","Type":"ContainerStarted","Data":"ec9c53529d08c879b5bc8ff8111bd83f9f58d2b7634fa0d6318e65efacf17d02"} Feb 18 14:11:24 crc kubenswrapper[4739]: I0218 14:11:24.689842 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"d13e1961-45de-4db2-a4cb-04c91c7b18ad","Type":"ContainerStarted","Data":"3d17c81b23ff01dee538d0fc06e19283ec9348614258e80cd5462e0fbdb7947c"} Feb 18 14:11:24 crc kubenswrapper[4739]: I0218 14:11:24.690901 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"bfabc0be-78aa-4cf2-ae16-6d226b95be03","Type":"ContainerStarted","Data":"04744e574c40db5196b3499df64c518a9e54a0329e3ebd298a5afdee099222fa"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.718405 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"bfabc0be-78aa-4cf2-ae16-6d226b95be03","Type":"ContainerStarted","Data":"00f15253fceac7920379827392ab362285e548cac1b9d9ea99fd11eb8a1cd32e"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.719504 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.721524 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" event={"ID":"3886312a-0449-43cc-b914-a4633b2c7e80","Type":"ContainerStarted","Data":"7381d1d23b8d64918b8e9f22e68927268dab429d1c85352847348957fce0e46a"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.721641 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.723499 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8cadd086-3e21-4dfc-9577-356fdcfe83c1","Type":"ContainerStarted","Data":"fd6665c203067679e160dad8384fb0adc38b55320b64f990ae2b1fe6368bb00a"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.723994 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.724914 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"d13e1961-45de-4db2-a4cb-04c91c7b18ad","Type":"ContainerStarted","Data":"c8016b50df8e7d5202238a1b97a3c4a719a6605afe6a0cfb8d168a1e6ddeb215"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.725602 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.727012 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" event={"ID":"82d2d64c-4971-48ee-a75c-30adadf054de","Type":"ContainerStarted","Data":"f00db3955efbc3e250bd1c83a5d608b42978648715859357e9255dc3ec695a6f"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.728839 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" 
event={"ID":"d2537052-1467-4892-afe4-cafbbdfbd645","Type":"ContainerStarted","Data":"edadc01b8674abed17f814e13f5f06aa4b70cbd3b8b2ecdc0f076d0b2f9144cf"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.729007 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.730584 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" event={"ID":"717b73b9-8190-41ce-8513-eb314a37cdfd","Type":"ContainerStarted","Data":"b02d91b1e269c801a1c546132c606efaf1c5c70268928a72a09b5b15ae12b22d"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.732186 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" event={"ID":"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b","Type":"ContainerStarted","Data":"1ddc7dc066063733f48c303a27576fdabb8b5830d73bf262400021ced70c8369"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.732358 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.742500 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.389066495 podStartE2EDuration="7.7424849s" podCreationTimestamp="2026-02-18 14:11:21 +0000 UTC" firstStartedPulling="2026-02-18 14:11:23.870009096 +0000 UTC m=+716.365730008" lastFinishedPulling="2026-02-18 14:11:28.223427491 +0000 UTC m=+720.719148413" observedRunningTime="2026-02-18 14:11:28.736806188 +0000 UTC m=+721.232527130" watchObservedRunningTime="2026-02-18 14:11:28.7424849 +0000 UTC m=+721.238205822" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.760521 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.6833621020000002 podStartE2EDuration="6.760506297s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:24.151134567 +0000 UTC m=+716.646855479" lastFinishedPulling="2026-02-18 14:11:28.228278752 +0000 UTC m=+720.723999674" observedRunningTime="2026-02-18 14:11:28.758761004 +0000 UTC m=+721.254481926" watchObservedRunningTime="2026-02-18 14:11:28.760506297 +0000 UTC m=+721.256227219" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.791582 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podStartSLOduration=2.306986597 podStartE2EDuration="7.791567068s" podCreationTimestamp="2026-02-18 14:11:21 +0000 UTC" firstStartedPulling="2026-02-18 14:11:22.739804744 +0000 UTC m=+715.235525666" lastFinishedPulling="2026-02-18 14:11:28.224385215 +0000 UTC m=+720.720106137" observedRunningTime="2026-02-18 14:11:28.787289872 +0000 UTC m=+721.283010804" watchObservedRunningTime="2026-02-18 14:11:28.791567068 +0000 UTC m=+721.287287990" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.809846 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" podStartSLOduration=1.7629017679999999 podStartE2EDuration="6.809829732s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:23.098705776 +0000 UTC m=+715.594426698" lastFinishedPulling="2026-02-18 14:11:28.14563374 +0000 UTC m=+720.641354662" observedRunningTime="2026-02-18 14:11:28.80574095 +0000 UTC m=+721.301461872" watchObservedRunningTime="2026-02-18 14:11:28.809829732 +0000 UTC m=+721.305550654" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.827128 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" 
podStartSLOduration=1.473390399 podStartE2EDuration="6.827109171s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:22.869386811 +0000 UTC m=+715.365107733" lastFinishedPulling="2026-02-18 14:11:28.223105583 +0000 UTC m=+720.718826505" observedRunningTime="2026-02-18 14:11:28.821350248 +0000 UTC m=+721.317071160" watchObservedRunningTime="2026-02-18 14:11:28.827109171 +0000 UTC m=+721.322830093" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.840468 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.499322572 podStartE2EDuration="6.840427911s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:23.885784818 +0000 UTC m=+716.381505740" lastFinishedPulling="2026-02-18 14:11:28.226890157 +0000 UTC m=+720.722611079" observedRunningTime="2026-02-18 14:11:28.838632157 +0000 UTC m=+721.334353069" watchObservedRunningTime="2026-02-18 14:11:28.840427911 +0000 UTC m=+721.336148833" Feb 18 14:11:29 crc kubenswrapper[4739]: I0218 14:11:29.372603 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:11:29 crc kubenswrapper[4739]: I0218 14:11:29.372686 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.751374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" 
event={"ID":"717b73b9-8190-41ce-8513-eb314a37cdfd","Type":"ContainerStarted","Data":"3742d5b78014809eaa56cf845ee9ae4816d365c82219869773ab5acbcde93dfc"} Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.751834 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.751845 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.757244 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" event={"ID":"82d2d64c-4971-48ee-a75c-30adadf054de","Type":"ContainerStarted","Data":"ff8a9dfc4df6268def608077d571f8fb0f116e21ba2fd64008e4d7e87caa8782"} Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.766124 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.769272 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.789137 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podStartSLOduration=1.695161985 podStartE2EDuration="8.789101156s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:23.316312709 +0000 UTC m=+715.812033631" lastFinishedPulling="2026-02-18 14:11:30.41025188 +0000 UTC m=+722.905972802" observedRunningTime="2026-02-18 14:11:30.772063973 +0000 UTC m=+723.267784905" watchObservedRunningTime="2026-02-18 14:11:30.789101156 +0000 UTC m=+723.284822118" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.854873 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podStartSLOduration=1.603985851 podStartE2EDuration="8.854854869s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:23.165607547 +0000 UTC m=+715.661328479" lastFinishedPulling="2026-02-18 14:11:30.416476575 +0000 UTC m=+722.912197497" observedRunningTime="2026-02-18 14:11:30.843258321 +0000 UTC m=+723.338979263" watchObservedRunningTime="2026-02-18 14:11:30.854854869 +0000 UTC m=+723.350575791" Feb 18 14:11:31 crc kubenswrapper[4739]: I0218 14:11:31.766345 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:31 crc kubenswrapper[4739]: I0218 14:11:31.766426 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:31 crc kubenswrapper[4739]: I0218 14:11:31.779311 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:31 crc kubenswrapper[4739]: I0218 14:11:31.783860 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:43 crc kubenswrapper[4739]: I0218 14:11:43.442930 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 18 14:11:43 crc kubenswrapper[4739]: I0218 14:11:43.443337 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 
18 14:11:43 crc kubenswrapper[4739]: I0218 14:11:43.679837 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:43 crc kubenswrapper[4739]: I0218 14:11:43.683772 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:52 crc kubenswrapper[4739]: I0218 14:11:52.158807 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:52 crc kubenswrapper[4739]: I0218 14:11:52.440926 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:52 crc kubenswrapper[4739]: I0218 14:11:52.602607 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:53 crc kubenswrapper[4739]: I0218 14:11:53.440669 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 18 14:11:53 crc kubenswrapper[4739]: I0218 14:11:53.440734 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.372514 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:11:59 crc 
kubenswrapper[4739]: I0218 14:11:59.373084 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.373170 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.373866 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.373924 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939" gracePeriod=600 Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.971765 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939" exitCode=0 Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.971829 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939"} 
Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.972044 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41"} Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.972064 4739 scope.go:117] "RemoveContainer" containerID="e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775" Feb 18 14:12:01 crc kubenswrapper[4739]: I0218 14:12:01.872562 4739 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 14:12:03 crc kubenswrapper[4739]: I0218 14:12:03.439279 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 18 14:12:03 crc kubenswrapper[4739]: I0218 14:12:03.439572 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:12:13 crc kubenswrapper[4739]: I0218 14:12:13.439000 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 18 14:12:13 crc kubenswrapper[4739]: I0218 14:12:13.439297 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 
18 14:12:23 crc kubenswrapper[4739]: I0218 14:12:23.442492 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.095284 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rhjbv"] Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.096678 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.105438 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.105770 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-zpmx2" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.105825 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.105884 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.105779 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.120799 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.129774 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rhjbv"] Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.165244 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rhjbv"] Feb 18 14:12:41 crc kubenswrapper[4739]: E0218 14:12:41.165784 4739 pod_workers.go:1301] "Error syncing pod, skipping" 
err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-s5dgm metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-rhjbv" podUID="aa1b5b42-cc82-48f9-9cf8-9da8994d5199" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.243675 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5dgm\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.243737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.243875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.243915 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.243983 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244008 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244639 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244759 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244801 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244858 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244966 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.276967 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.284372 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347384 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347436 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347496 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir\") pod \"collector-rhjbv\" (UID: 
\"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347526 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347542 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5dgm\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 
14:12:41.347636 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347797 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: E0218 14:12:41.347880 4739 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Feb 18 14:12:41 crc kubenswrapper[4739]: E0218 14:12:41.347939 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver podName:aa1b5b42-cc82-48f9-9cf8-9da8994d5199 nodeName:}" failed. 
No retries permitted until 2026-02-18 14:12:41.847921222 +0000 UTC m=+794.343642154 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver") pod "collector-rhjbv" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199") : secret "collector-syslog-receiver" not found Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.348819 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.348908 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.349301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.349592 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.353846 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv"
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.354699 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv"
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.361049 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv"
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.380434 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5dgm\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv"
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.386072 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv"
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550402 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5dgm\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550496 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550554 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550582 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550614 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550682 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550719 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550761 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550845 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550875 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.551245 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.551613 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir" (OuterVolumeSpecName: "datadir") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.551762 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.552125 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config" (OuterVolumeSpecName: "config") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.552513 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.553966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm" (OuterVolumeSpecName: "kube-api-access-s5dgm") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "kube-api-access-s5dgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.554000 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics" (OuterVolumeSpecName: "metrics") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.554542 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token" (OuterVolumeSpecName: "sa-token") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.554713 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp" (OuterVolumeSpecName: "tmp") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.554898 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token" (OuterVolumeSpecName: "collector-token") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652234 4739 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652280 4739 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652293 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652305 4739 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652316 4739 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652327 4739 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652340 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652353 4739 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652366 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5dgm\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652382 4739 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.855478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv"
Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.859385 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.057879 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") "
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.060739 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.160297 4739 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.282953 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rhjbv"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.335788 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rhjbv"]
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.346036 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-ptdrt"]
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.347122 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.349269 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.349520 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-zpmx2"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.349643 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.349860 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.349992 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.352527 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-rhjbv"]
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.358549 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.359043 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ptdrt"]
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.363029 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-metrics\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.418990 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa1b5b42-cc82-48f9-9cf8-9da8994d5199" path="/var/lib/kubelet/pods/aa1b5b42-cc82-48f9-9cf8-9da8994d5199/volumes"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.465347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrkm\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-kube-api-access-nhrkm\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.465425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-metrics\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.465991 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config-openshift-service-cacrt\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466111 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-sa-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466288 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3d3df5da-d291-44d1-890f-4f094d9e8301-datadir\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466395 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d3df5da-d291-44d1-890f-4f094d9e8301-tmp\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466428 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466471 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-trusted-ca\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-syslog-receiver\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-entrypoint\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.470303 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-metrics\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567570 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567638 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-trusted-ca\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567665 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-syslog-receiver\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567684 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-entrypoint\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567807 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrkm\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-kube-api-access-nhrkm\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567847 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config-openshift-service-cacrt\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567879 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-sa-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567902 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3d3df5da-d291-44d1-890f-4f094d9e8301-datadir\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567958 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d3df5da-d291-44d1-890f-4f094d9e8301-tmp\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.568590 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-trusted-ca\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.569068 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3d3df5da-d291-44d1-890f-4f094d9e8301-datadir\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.569330 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-entrypoint\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.570042 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config-openshift-service-cacrt\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.570477 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.571325 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d3df5da-d291-44d1-890f-4f094d9e8301-tmp\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.572048 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.572481 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-syslog-receiver\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.585966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-sa-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.586671 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrkm\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-kube-api-access-nhrkm\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.694706 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ptdrt"
Feb 18 14:12:43 crc kubenswrapper[4739]: I0218 14:12:43.121661 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ptdrt"]
Feb 18 14:12:43 crc kubenswrapper[4739]: W0218 14:12:43.125654 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d3df5da_d291_44d1_890f_4f094d9e8301.slice/crio-87ec7fbce368c7cdcd00a1b56d45e57beb9ad5b94ec3ab2ea5c2cc10c06058e7 WatchSource:0}: Error finding container 87ec7fbce368c7cdcd00a1b56d45e57beb9ad5b94ec3ab2ea5c2cc10c06058e7: Status 404 returned error can't find the container with id 87ec7fbce368c7cdcd00a1b56d45e57beb9ad5b94ec3ab2ea5c2cc10c06058e7
Feb 18 14:12:43 crc kubenswrapper[4739]: I0218 14:12:43.292823 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ptdrt" event={"ID":"3d3df5da-d291-44d1-890f-4f094d9e8301","Type":"ContainerStarted","Data":"87ec7fbce368c7cdcd00a1b56d45e57beb9ad5b94ec3ab2ea5c2cc10c06058e7"}
Feb 18 14:12:49 crc kubenswrapper[4739]: I0218 14:12:49.338594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ptdrt" event={"ID":"3d3df5da-d291-44d1-890f-4f094d9e8301","Type":"ContainerStarted","Data":"e5deeafef9dfa5065788c3d8bbe69dcaa4b097f1784edab75ef3b093d266bdd6"}
Feb 18 14:12:49 crc kubenswrapper[4739]: I0218 14:12:49.361473 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-ptdrt" podStartSLOduration=2.321570688 podStartE2EDuration="7.361459434s" podCreationTimestamp="2026-02-18 14:12:42 +0000 UTC" firstStartedPulling="2026-02-18 14:12:43.127489451 +0000 UTC m=+795.623210373" lastFinishedPulling="2026-02-18 14:12:48.167378197 +0000 UTC m=+800.663099119" observedRunningTime="2026-02-18 14:12:49.359492656 +0000 UTC m=+801.855213578" watchObservedRunningTime="2026-02-18 14:12:49.361459434 +0000 UTC m=+801.857180366"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.602047 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"]
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.605150 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.606667 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.614278 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"]
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.656964 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.657030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w85qz\" (UniqueName: \"kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.657056 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.758169 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.758255 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w85qz\" (UniqueName: \"kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.758291 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.758788 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.759069 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.780390 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w85qz\" (UniqueName: \"kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.970895 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"
Feb 18 14:13:21 crc kubenswrapper[4739]: I0218 14:13:21.428537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"]
Feb 18 14:13:21 crc kubenswrapper[4739]: I0218 14:13:21.568512 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" event={"ID":"6bd02fb2-605c-422a-9c28-67afe997782a","Type":"ContainerStarted","Data":"03c392543bcf7a212fd31fa833b25b81ada2374c6acf5495f28459c4fddb81e1"}
Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.579267 4739 generic.go:334] "Generic (PLEG): container finished" podID="6bd02fb2-605c-422a-9c28-67afe997782a" containerID="26ad8b06e108e948d219f5bf70871fd3097f85023f35ef81fd8e5c1b2be6f5d7" exitCode=0
Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.579386 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" event={"ID":"6bd02fb2-605c-422a-9c28-67afe997782a","Type":"ContainerDied","Data":"26ad8b06e108e948d219f5bf70871fd3097f85023f35ef81fd8e5c1b2be6f5d7"}
Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.930973 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"]
Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.933255 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.953313 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.992007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.992076 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fkwc\" (UniqueName: \"kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.992187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.093934 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.093990 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2fkwc\" (UniqueName: \"kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.094039 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.094505 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.094557 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.117746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fkwc\" (UniqueName: \"kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.258536 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.658169 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:23 crc kubenswrapper[4739]: W0218 14:13:23.677822 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b40ab76_c055_427e_9e8a_f553ae86113c.slice/crio-472d1c0686504098b4902a6b0ffad9bd6a5072f1ad104d52c9f68f91b00f0772 WatchSource:0}: Error finding container 472d1c0686504098b4902a6b0ffad9bd6a5072f1ad104d52c9f68f91b00f0772: Status 404 returned error can't find the container with id 472d1c0686504098b4902a6b0ffad9bd6a5072f1ad104d52c9f68f91b00f0772 Feb 18 14:13:24 crc kubenswrapper[4739]: I0218 14:13:24.596116 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerID="fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47" exitCode=0 Feb 18 14:13:24 crc kubenswrapper[4739]: I0218 14:13:24.596534 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerDied","Data":"fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47"} Feb 18 14:13:24 crc kubenswrapper[4739]: I0218 14:13:24.596628 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerStarted","Data":"472d1c0686504098b4902a6b0ffad9bd6a5072f1ad104d52c9f68f91b00f0772"} Feb 18 14:13:24 crc kubenswrapper[4739]: I0218 14:13:24.599632 4739 generic.go:334] "Generic (PLEG): container finished" podID="6bd02fb2-605c-422a-9c28-67afe997782a" containerID="50a53bc980ed07dc86602f6de51cd28e5b32eab649a9d4da648c3c3de6a9cc42" exitCode=0 Feb 18 14:13:24 crc kubenswrapper[4739]: I0218 14:13:24.599692 
4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" event={"ID":"6bd02fb2-605c-422a-9c28-67afe997782a","Type":"ContainerDied","Data":"50a53bc980ed07dc86602f6de51cd28e5b32eab649a9d4da648c3c3de6a9cc42"} Feb 18 14:13:25 crc kubenswrapper[4739]: I0218 14:13:25.609295 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerStarted","Data":"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f"} Feb 18 14:13:25 crc kubenswrapper[4739]: I0218 14:13:25.613144 4739 generic.go:334] "Generic (PLEG): container finished" podID="6bd02fb2-605c-422a-9c28-67afe997782a" containerID="1e2468c02ad86c812d40581838ffe7c6492e6248aa2f79e8096408c8f16ebdbd" exitCode=0 Feb 18 14:13:25 crc kubenswrapper[4739]: I0218 14:13:25.613187 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" event={"ID":"6bd02fb2-605c-422a-9c28-67afe997782a","Type":"ContainerDied","Data":"1e2468c02ad86c812d40581838ffe7c6492e6248aa2f79e8096408c8f16ebdbd"} Feb 18 14:13:26 crc kubenswrapper[4739]: I0218 14:13:26.620303 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerID="8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f" exitCode=0 Feb 18 14:13:26 crc kubenswrapper[4739]: I0218 14:13:26.620341 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerDied","Data":"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f"} Feb 18 14:13:26 crc kubenswrapper[4739]: I0218 14:13:26.917782 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.048281 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w85qz\" (UniqueName: \"kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz\") pod \"6bd02fb2-605c-422a-9c28-67afe997782a\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.048467 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle\") pod \"6bd02fb2-605c-422a-9c28-67afe997782a\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.048730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util\") pod \"6bd02fb2-605c-422a-9c28-67afe997782a\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.049009 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle" (OuterVolumeSpecName: "bundle") pod "6bd02fb2-605c-422a-9c28-67afe997782a" (UID: "6bd02fb2-605c-422a-9c28-67afe997782a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.049301 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.066861 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util" (OuterVolumeSpecName: "util") pod "6bd02fb2-605c-422a-9c28-67afe997782a" (UID: "6bd02fb2-605c-422a-9c28-67afe997782a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.083618 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz" (OuterVolumeSpecName: "kube-api-access-w85qz") pod "6bd02fb2-605c-422a-9c28-67afe997782a" (UID: "6bd02fb2-605c-422a-9c28-67afe997782a"). InnerVolumeSpecName "kube-api-access-w85qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.154361 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.154398 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w85qz\" (UniqueName: \"kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.628140 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.628141 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" event={"ID":"6bd02fb2-605c-422a-9c28-67afe997782a","Type":"ContainerDied","Data":"03c392543bcf7a212fd31fa833b25b81ada2374c6acf5495f28459c4fddb81e1"} Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.628590 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03c392543bcf7a212fd31fa833b25b81ada2374c6acf5495f28459c4fddb81e1" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.630321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerStarted","Data":"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0"} Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.653117 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qm8vl" podStartSLOduration=3.206870662 podStartE2EDuration="5.653099223s" podCreationTimestamp="2026-02-18 14:13:22 +0000 UTC" firstStartedPulling="2026-02-18 14:13:24.598832482 +0000 UTC m=+837.094553404" lastFinishedPulling="2026-02-18 14:13:27.045061043 +0000 UTC m=+839.540781965" observedRunningTime="2026-02-18 14:13:27.649149437 +0000 UTC m=+840.144870369" watchObservedRunningTime="2026-02-18 14:13:27.653099223 +0000 UTC m=+840.148820145" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.419047 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-77rqb"] Feb 18 14:13:30 crc kubenswrapper[4739]: E0218 14:13:30.419513 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" 
containerName="pull" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.419525 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="pull" Feb 18 14:13:30 crc kubenswrapper[4739]: E0218 14:13:30.419536 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="extract" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.419542 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="extract" Feb 18 14:13:30 crc kubenswrapper[4739]: E0218 14:13:30.419559 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="util" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.419565 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="util" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.419690 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="extract" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.420271 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.422119 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jfk6h" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.422273 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.422289 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.432629 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-77rqb"] Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.506218 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlttt\" (UniqueName: \"kubernetes.io/projected/2f5c1234-49df-4f31-842f-cdaf04adff3c-kube-api-access-nlttt\") pod \"nmstate-operator-694c9596b7-77rqb\" (UID: \"2f5c1234-49df-4f31-842f-cdaf04adff3c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.607576 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlttt\" (UniqueName: \"kubernetes.io/projected/2f5c1234-49df-4f31-842f-cdaf04adff3c-kube-api-access-nlttt\") pod \"nmstate-operator-694c9596b7-77rqb\" (UID: \"2f5c1234-49df-4f31-842f-cdaf04adff3c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.628622 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlttt\" (UniqueName: \"kubernetes.io/projected/2f5c1234-49df-4f31-842f-cdaf04adff3c-kube-api-access-nlttt\") pod \"nmstate-operator-694c9596b7-77rqb\" (UID: 
\"2f5c1234-49df-4f31-842f-cdaf04adff3c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.737588 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" Feb 18 14:13:31 crc kubenswrapper[4739]: I0218 14:13:31.272956 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-77rqb"] Feb 18 14:13:31 crc kubenswrapper[4739]: W0218 14:13:31.279715 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f5c1234_49df_4f31_842f_cdaf04adff3c.slice/crio-a3da4e204416676022d6792527349df1dfd1067a0b42cde0d88c8fafff073f7d WatchSource:0}: Error finding container a3da4e204416676022d6792527349df1dfd1067a0b42cde0d88c8fafff073f7d: Status 404 returned error can't find the container with id a3da4e204416676022d6792527349df1dfd1067a0b42cde0d88c8fafff073f7d Feb 18 14:13:31 crc kubenswrapper[4739]: I0218 14:13:31.658890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" event={"ID":"2f5c1234-49df-4f31-842f-cdaf04adff3c","Type":"ContainerStarted","Data":"a3da4e204416676022d6792527349df1dfd1067a0b42cde0d88c8fafff073f7d"} Feb 18 14:13:33 crc kubenswrapper[4739]: I0218 14:13:33.259019 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:33 crc kubenswrapper[4739]: I0218 14:13:33.259372 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:33 crc kubenswrapper[4739]: I0218 14:13:33.301706 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:33 crc kubenswrapper[4739]: I0218 14:13:33.732812 4739 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:34 crc kubenswrapper[4739]: I0218 14:13:34.694276 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" event={"ID":"2f5c1234-49df-4f31-842f-cdaf04adff3c","Type":"ContainerStarted","Data":"56b4af6f1217a04bbd1a405ce01403d508c4770c8780d7c6d34a1e41809945b5"} Feb 18 14:13:34 crc kubenswrapper[4739]: I0218 14:13:34.712555 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" podStartSLOduration=2.45339476 podStartE2EDuration="4.712520443s" podCreationTimestamp="2026-02-18 14:13:30 +0000 UTC" firstStartedPulling="2026-02-18 14:13:31.281629028 +0000 UTC m=+843.777349950" lastFinishedPulling="2026-02-18 14:13:33.540754711 +0000 UTC m=+846.036475633" observedRunningTime="2026-02-18 14:13:34.71157304 +0000 UTC m=+847.207293962" watchObservedRunningTime="2026-02-18 14:13:34.712520443 +0000 UTC m=+847.208241445" Feb 18 14:13:35 crc kubenswrapper[4739]: I0218 14:13:35.720185 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:35 crc kubenswrapper[4739]: I0218 14:13:35.720491 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qm8vl" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="registry-server" containerID="cri-o://d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0" gracePeriod=2 Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.103185 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.203413 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fkwc\" (UniqueName: \"kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc\") pod \"1b40ab76-c055-427e-9e8a-f553ae86113c\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.203488 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities\") pod \"1b40ab76-c055-427e-9e8a-f553ae86113c\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.203538 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content\") pod \"1b40ab76-c055-427e-9e8a-f553ae86113c\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.204595 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities" (OuterVolumeSpecName: "utilities") pod "1b40ab76-c055-427e-9e8a-f553ae86113c" (UID: "1b40ab76-c055-427e-9e8a-f553ae86113c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.209406 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc" (OuterVolumeSpecName: "kube-api-access-2fkwc") pod "1b40ab76-c055-427e-9e8a-f553ae86113c" (UID: "1b40ab76-c055-427e-9e8a-f553ae86113c"). InnerVolumeSpecName "kube-api-access-2fkwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.305696 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fkwc\" (UniqueName: \"kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.305737 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.712495 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerID="d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0" exitCode=0 Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.712550 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerDied","Data":"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0"} Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.712583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerDied","Data":"472d1c0686504098b4902a6b0ffad9bd6a5072f1ad104d52c9f68f91b00f0772"} Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.712594 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.712607 4739 scope.go:117] "RemoveContainer" containerID="d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.732531 4739 scope.go:117] "RemoveContainer" containerID="8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.760652 4739 scope.go:117] "RemoveContainer" containerID="fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.777583 4739 scope.go:117] "RemoveContainer" containerID="d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0" Feb 18 14:13:36 crc kubenswrapper[4739]: E0218 14:13:36.781085 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0\": container with ID starting with d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0 not found: ID does not exist" containerID="d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.781151 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0"} err="failed to get container status \"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0\": rpc error: code = NotFound desc = could not find container \"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0\": container with ID starting with d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0 not found: ID does not exist" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.781192 4739 scope.go:117] "RemoveContainer" 
containerID="8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f" Feb 18 14:13:36 crc kubenswrapper[4739]: E0218 14:13:36.781540 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f\": container with ID starting with 8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f not found: ID does not exist" containerID="8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.781579 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f"} err="failed to get container status \"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f\": rpc error: code = NotFound desc = could not find container \"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f\": container with ID starting with 8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f not found: ID does not exist" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.781603 4739 scope.go:117] "RemoveContainer" containerID="fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47" Feb 18 14:13:36 crc kubenswrapper[4739]: E0218 14:13:36.782032 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47\": container with ID starting with fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47 not found: ID does not exist" containerID="fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.782067 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47"} err="failed to get container status \"fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47\": rpc error: code = NotFound desc = could not find container \"fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47\": container with ID starting with fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47 not found: ID does not exist" Feb 18 14:13:37 crc kubenswrapper[4739]: I0218 14:13:37.723341 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b40ab76-c055-427e-9e8a-f553ae86113c" (UID: "1b40ab76-c055-427e-9e8a-f553ae86113c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:13:37 crc kubenswrapper[4739]: I0218 14:13:37.726470 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:37 crc kubenswrapper[4739]: I0218 14:13:37.941052 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:37 crc kubenswrapper[4739]: I0218 14:13:37.946657 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:38 crc kubenswrapper[4739]: I0218 14:13:38.421182 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" path="/var/lib/kubelet/pods/1b40ab76-c055-427e-9e8a-f553ae86113c/volumes" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.543651 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8"] Feb 18 14:13:41 crc kubenswrapper[4739]: E0218 
14:13:41.544616 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="registry-server" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.544657 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="registry-server" Feb 18 14:13:41 crc kubenswrapper[4739]: E0218 14:13:41.544679 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="extract-utilities" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.544688 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="extract-utilities" Feb 18 14:13:41 crc kubenswrapper[4739]: E0218 14:13:41.544701 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="extract-content" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.544709 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="extract-content" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.544878 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="registry-server" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.545888 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.548121 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bqjx6" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.552167 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.553188 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.554338 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.557659 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.569592 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.574980 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-xwm5v"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.576191 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.698515 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-ovs-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.698600 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-dbus-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.698802 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-nmstate-lock\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.698872 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cstr\" (UniqueName: \"kubernetes.io/projected/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-kube-api-access-5cstr\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.698908 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: 
\"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.699010 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkhlx\" (UniqueName: \"kubernetes.io/projected/3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b-kube-api-access-xkhlx\") pod \"nmstate-metrics-58c85c668d-4l8z8\" (UID: \"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.699054 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtzg\" (UniqueName: \"kubernetes.io/projected/ff0bf868-48fc-48a7-845d-3286c1dd16f0-kube-api-access-qxtzg\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.710525 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.715675 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.717435 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.717531 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.723952 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ltrzj" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.727703 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.800903 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/292e9bf2-9674-423f-9ba5-4e83ff259a06-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.800973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkhlx\" (UniqueName: \"kubernetes.io/projected/3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b-kube-api-access-xkhlx\") pod \"nmstate-metrics-58c85c668d-4l8z8\" (UID: \"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801009 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxtzg\" (UniqueName: \"kubernetes.io/projected/ff0bf868-48fc-48a7-845d-3286c1dd16f0-kube-api-access-qxtzg\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: 
\"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801056 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krl4p\" (UniqueName: \"kubernetes.io/projected/292e9bf2-9674-423f-9ba5-4e83ff259a06-kube-api-access-krl4p\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801103 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-ovs-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801147 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-dbus-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801201 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/292e9bf2-9674-423f-9ba5-4e83ff259a06-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801247 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-nmstate-lock\") pod 
\"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801279 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cstr\" (UniqueName: \"kubernetes.io/projected/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-kube-api-access-5cstr\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801303 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: E0218 14:13:41.801482 4739 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801496 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-ovs-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: E0218 14:13:41.801554 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair podName:ff0bf868-48fc-48a7-845d-3286c1dd16f0 nodeName:}" failed. No retries permitted until 2026-02-18 14:13:42.301532683 +0000 UTC m=+854.797253605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair") pod "nmstate-webhook-866bcb46dc-wtz97" (UID: "ff0bf868-48fc-48a7-845d-3286c1dd16f0") : secret "openshift-nmstate-webhook" not found Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-dbus-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801879 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-nmstate-lock\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.819598 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cstr\" (UniqueName: \"kubernetes.io/projected/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-kube-api-access-5cstr\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.824950 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxtzg\" (UniqueName: \"kubernetes.io/projected/ff0bf868-48fc-48a7-845d-3286c1dd16f0-kube-api-access-qxtzg\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.829774 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkhlx\" (UniqueName: 
\"kubernetes.io/projected/3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b-kube-api-access-xkhlx\") pod \"nmstate-metrics-58c85c668d-4l8z8\" (UID: \"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.868807 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.890880 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.892384 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.895869 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.907476 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/292e9bf2-9674-423f-9ba5-4e83ff259a06-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.907577 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krl4p\" (UniqueName: \"kubernetes.io/projected/292e9bf2-9674-423f-9ba5-4e83ff259a06-kube-api-access-krl4p\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.907669 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/292e9bf2-9674-423f-9ba5-4e83ff259a06-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.908642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/292e9bf2-9674-423f-9ba5-4e83ff259a06-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.924206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/292e9bf2-9674-423f-9ba5-4e83ff259a06-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.927831 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.946104 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krl4p\" (UniqueName: \"kubernetes.io/projected/292e9bf2-9674-423f-9ba5-4e83ff259a06-kube-api-access-krl4p\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.009913 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config\") pod \"console-58cc898c97-gzzx9\" (UID: 
\"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.009978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.009997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.010025 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.010077 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.010105 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7h7c\" (UniqueName: 
\"kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.010138 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.041188 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.111677 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7h7c\" (UniqueName: \"kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112486 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " 
pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112619 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112645 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112692 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.113829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " 
pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.114469 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.115302 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.115840 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.122181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.122493 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc 
kubenswrapper[4739]: I0218 14:13:42.136176 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7h7c\" (UniqueName: \"kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.281901 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.318547 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.322067 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.372575 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8"] Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.480975 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.570566 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g"] Feb 18 14:13:42 crc kubenswrapper[4739]: W0218 14:13:42.571178 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod292e9bf2_9674_423f_9ba5_4e83ff259a06.slice/crio-7d3c31fe261df466571cb9e5f65f322987eed4df8dc9a806c7cc9344b617a57e WatchSource:0}: Error finding container 7d3c31fe261df466571cb9e5f65f322987eed4df8dc9a806c7cc9344b617a57e: Status 404 returned error can't find the container with id 7d3c31fe261df466571cb9e5f65f322987eed4df8dc9a806c7cc9344b617a57e Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.734991 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.754191 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" event={"ID":"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b","Type":"ContainerStarted","Data":"db934d01b5ad14806be80372f23deb93efa9d5ab36049317a3bbb8668bca66c5"} Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.755428 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xwm5v" event={"ID":"547a8c99-05a3-45bf-9e45-785d6cdb8fb5","Type":"ContainerStarted","Data":"11ca5504b72f9aa7707686a2a3fee5372fa6474e212bf2540858e3cd76434747"} Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.757278 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58cc898c97-gzzx9" event={"ID":"4cd95c4f-592d-4c7e-bdeb-ec99b168126b","Type":"ContainerStarted","Data":"df9030b739dbc83cef12914ae8d05fcfaf3c9ae9c31af8304d4b753fc912b097"} Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.759216 
4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" event={"ID":"292e9bf2-9674-423f-9ba5-4e83ff259a06","Type":"ContainerStarted","Data":"7d3c31fe261df466571cb9e5f65f322987eed4df8dc9a806c7cc9344b617a57e"}
Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.895383 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97"]
Feb 18 14:13:43 crc kubenswrapper[4739]: I0218 14:13:43.770306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" event={"ID":"ff0bf868-48fc-48a7-845d-3286c1dd16f0","Type":"ContainerStarted","Data":"e72279406b8aa4424db9dd94e06b27a62fb2614ebf2ab1c6e5b7641fdb647dc5"}
Feb 18 14:13:43 crc kubenswrapper[4739]: I0218 14:13:43.772028 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58cc898c97-gzzx9" event={"ID":"4cd95c4f-592d-4c7e-bdeb-ec99b168126b","Type":"ContainerStarted","Data":"0944c4f82b66901b45134e70e812dca310249100c057d0ce2374a1d9db397c6f"}
Feb 18 14:13:43 crc kubenswrapper[4739]: I0218 14:13:43.799344 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-58cc898c97-gzzx9" podStartSLOduration=2.799298815 podStartE2EDuration="2.799298815s" podCreationTimestamp="2026-02-18 14:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:13:43.792936252 +0000 UTC m=+856.288657184" watchObservedRunningTime="2026-02-18 14:13:43.799298815 +0000 UTC m=+856.295019737"
Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.788964 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" event={"ID":"292e9bf2-9674-423f-9ba5-4e83ff259a06","Type":"ContainerStarted","Data":"6e9a3456c2bbfed427d2566c878b3063321f404582a6342268041468bfa5cd9d"}
Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.792073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" event={"ID":"ff0bf868-48fc-48a7-845d-3286c1dd16f0","Type":"ContainerStarted","Data":"b30eef48cdad31e60230ed1e35d86c82376d5afd7e030353eb6a5ee68ac7bff3"}
Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.792343 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97"
Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.794089 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" event={"ID":"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b","Type":"ContainerStarted","Data":"ebede555442eacc2748b40c607b90da96e870241f189750b9363114a60bcdf88"}
Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.796077 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xwm5v" event={"ID":"547a8c99-05a3-45bf-9e45-785d6cdb8fb5","Type":"ContainerStarted","Data":"d5026099f7646b3ba5acdf68b47de85594cce7b67c2d1abc5c66313226ee4178"}
Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.796322 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-xwm5v"
Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.810728 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" podStartSLOduration=2.2918268 podStartE2EDuration="4.810710016s" podCreationTimestamp="2026-02-18 14:13:41 +0000 UTC" firstStartedPulling="2026-02-18 14:13:42.580805892 +0000 UTC m=+855.076526824" lastFinishedPulling="2026-02-18 14:13:45.099689078 +0000 UTC m=+857.595410040" observedRunningTime="2026-02-18 14:13:45.80669578 +0000 UTC m=+858.302416712" watchObservedRunningTime="2026-02-18 14:13:45.810710016 +0000 UTC m=+858.306430938"
Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.829819 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" podStartSLOduration=2.63595184 podStartE2EDuration="4.829799955s" podCreationTimestamp="2026-02-18 14:13:41 +0000 UTC" firstStartedPulling="2026-02-18 14:13:42.904244745 +0000 UTC m=+855.399965667" lastFinishedPulling="2026-02-18 14:13:45.09809286 +0000 UTC m=+857.593813782" observedRunningTime="2026-02-18 14:13:45.82584845 +0000 UTC m=+858.321569372" watchObservedRunningTime="2026-02-18 14:13:45.829799955 +0000 UTC m=+858.325520887"
Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.844467 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-xwm5v" podStartSLOduration=1.65273233 podStartE2EDuration="4.844433137s" podCreationTimestamp="2026-02-18 14:13:41 +0000 UTC" firstStartedPulling="2026-02-18 14:13:41.952162713 +0000 UTC m=+854.447883635" lastFinishedPulling="2026-02-18 14:13:45.14386348 +0000 UTC m=+857.639584442" observedRunningTime="2026-02-18 14:13:45.843784151 +0000 UTC m=+858.339505083" watchObservedRunningTime="2026-02-18 14:13:45.844433137 +0000 UTC m=+858.340154059"
Feb 18 14:13:47 crc kubenswrapper[4739]: I0218 14:13:47.816372 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" event={"ID":"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b","Type":"ContainerStarted","Data":"c63a3d29fbd7009e66d95402576bdeeb8ab45a6edf7e55504cea6dbb0ea79c8f"}
Feb 18 14:13:47 crc kubenswrapper[4739]: I0218 14:13:47.837819 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" podStartSLOduration=1.896921749 podStartE2EDuration="6.837800553s" podCreationTimestamp="2026-02-18 14:13:41 +0000 UTC" firstStartedPulling="2026-02-18 14:13:42.384060243 +0000 UTC m=+854.879781165" lastFinishedPulling="2026-02-18 14:13:47.324939047 +0000 UTC m=+859.820659969" observedRunningTime="2026-02-18 14:13:47.833780486 +0000 UTC m=+860.329501428" watchObservedRunningTime="2026-02-18 14:13:47.837800553 +0000 UTC m=+860.333521465"
Feb 18 14:13:51 crc kubenswrapper[4739]: I0218 14:13:51.927573 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-xwm5v"
Feb 18 14:13:52 crc kubenswrapper[4739]: I0218 14:13:52.282018 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58cc898c97-gzzx9"
Feb 18 14:13:52 crc kubenswrapper[4739]: I0218 14:13:52.282066 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-58cc898c97-gzzx9"
Feb 18 14:13:52 crc kubenswrapper[4739]: I0218 14:13:52.286484 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58cc898c97-gzzx9"
Feb 18 14:13:52 crc kubenswrapper[4739]: I0218 14:13:52.854405 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58cc898c97-gzzx9"
Feb 18 14:13:52 crc kubenswrapper[4739]: I0218 14:13:52.911569 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-796648847c-cwj5j"]
Feb 18 14:13:59 crc kubenswrapper[4739]: I0218 14:13:59.372873 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:13:59 crc kubenswrapper[4739]: I0218 14:13:59.373249 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:14:02 crc kubenswrapper[4739]: I0218 14:14:02.487060 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97"
Feb 18 14:14:17 crc kubenswrapper[4739]: I0218 14:14:17.958233 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-796648847c-cwj5j" podUID="d4490109-c2b2-4264-b163-1e259f4b335c" containerName="console" containerID="cri-o://ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000" gracePeriod=15
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.444320 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-796648847c-cwj5j_d4490109-c2b2-4264-b163-1e259f4b335c/console/0.log"
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.444773 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511627 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v824p\" (UniqueName: \"kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") "
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511786 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") "
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511854 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") "
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511906 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") "
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511950 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") "
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511982 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") "
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.512025 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") "
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.512815 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config" (OuterVolumeSpecName: "console-config") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.512807 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.512866 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.513517 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.513544 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.513558 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.514985 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca" (OuterVolumeSpecName: "service-ca") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.521617 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.530722 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.536127 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p" (OuterVolumeSpecName: "kube-api-access-v824p") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "kube-api-access-v824p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.615902 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v824p\" (UniqueName: \"kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p\") on node \"crc\" DevicePath \"\""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.615966 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.615986 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca\") on node \"crc\" DevicePath \"\""
Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.616002 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.076997 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-796648847c-cwj5j_d4490109-c2b2-4264-b163-1e259f4b335c/console/0.log"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.077288 4739 generic.go:334] "Generic (PLEG): container finished" podID="d4490109-c2b2-4264-b163-1e259f4b335c" containerID="ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000" exitCode=2
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.077322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796648847c-cwj5j" event={"ID":"d4490109-c2b2-4264-b163-1e259f4b335c","Type":"ContainerDied","Data":"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000"}
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.077350 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796648847c-cwj5j" event={"ID":"d4490109-c2b2-4264-b163-1e259f4b335c","Type":"ContainerDied","Data":"ced41aeb18b143d7cb7b37389d8e7093c6f932a8b69ee8fd71755fd592dcd4fa"}
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.077369 4739 scope.go:117] "RemoveContainer" containerID="ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.077619 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-796648847c-cwj5j"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.110160 4739 scope.go:117] "RemoveContainer" containerID="ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000"
Feb 18 14:14:19 crc kubenswrapper[4739]: E0218 14:14:19.112094 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000\": container with ID starting with ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000 not found: ID does not exist" containerID="ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.112173 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000"} err="failed to get container status \"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000\": rpc error: code = NotFound desc = could not find container \"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000\": container with ID starting with ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000 not found: ID does not exist"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.120564 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-796648847c-cwj5j"]
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.133562 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-796648847c-cwj5j"]
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.856160 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"]
Feb 18 14:14:19 crc kubenswrapper[4739]: E0218 14:14:19.856536 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4490109-c2b2-4264-b163-1e259f4b335c" containerName="console"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.856551 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4490109-c2b2-4264-b163-1e259f4b335c" containerName="console"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.856716 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4490109-c2b2-4264-b163-1e259f4b335c" containerName="console"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.858003 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.860665 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.869051 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"]
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.941043 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.941109 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw4qs\" (UniqueName: \"kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.941162 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.043119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.043208 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw4qs\" (UniqueName: \"kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.043244 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.043759 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.043821 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.064522 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw4qs\" (UniqueName: \"kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.174687 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.422195 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4490109-c2b2-4264-b163-1e259f4b335c" path="/var/lib/kubelet/pods/d4490109-c2b2-4264-b163-1e259f4b335c/volumes"
Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.652652 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"]
Feb 18 14:14:21 crc kubenswrapper[4739]: I0218 14:14:21.098672 4739 generic.go:334] "Generic (PLEG): container finished" podID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerID="1a7b202e80c5eb13ad67a84ac5da7dbf5a09866eb3bb2c54dcc3f3e85e85eaab" exitCode=0
Feb 18 14:14:21 crc kubenswrapper[4739]: I0218 14:14:21.098728 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" event={"ID":"0e9e5f51-e676-4cb2-8e3e-b07341a3029a","Type":"ContainerDied","Data":"1a7b202e80c5eb13ad67a84ac5da7dbf5a09866eb3bb2c54dcc3f3e85e85eaab"}
Feb 18 14:14:21 crc kubenswrapper[4739]: I0218 14:14:21.098759 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" event={"ID":"0e9e5f51-e676-4cb2-8e3e-b07341a3029a","Type":"ContainerStarted","Data":"16fc8b4df0d353cf1de2e5a1109ebd6f73830749d657b3a4cc0dbd596b7a50ac"}
Feb 18 14:14:21 crc kubenswrapper[4739]: I0218 14:14:21.100366 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 14:14:23 crc kubenswrapper[4739]: I0218 14:14:23.113154 4739 generic.go:334] "Generic (PLEG): container finished" podID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerID="5d7c58e409f79b400684d32c3e68db1d709a14c1b605f47d3dcc69243875b01c" exitCode=0
Feb 18 14:14:23 crc kubenswrapper[4739]: I0218 14:14:23.113264 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" event={"ID":"0e9e5f51-e676-4cb2-8e3e-b07341a3029a","Type":"ContainerDied","Data":"5d7c58e409f79b400684d32c3e68db1d709a14c1b605f47d3dcc69243875b01c"}
Feb 18 14:14:24 crc kubenswrapper[4739]: I0218 14:14:24.123312 4739 generic.go:334] "Generic (PLEG): container finished" podID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerID="aa216d02d45707b907c1ea5ff97cba6cdb5c1e78b62b23811ea6dc4ed59a01ca" exitCode=0
Feb 18 14:14:24 crc kubenswrapper[4739]: I0218 14:14:24.123374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" event={"ID":"0e9e5f51-e676-4cb2-8e3e-b07341a3029a","Type":"ContainerDied","Data":"aa216d02d45707b907c1ea5ff97cba6cdb5c1e78b62b23811ea6dc4ed59a01ca"}
Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.411955 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.434325 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw4qs\" (UniqueName: \"kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs\") pod \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") "
Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.434375 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle\") pod \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") "
Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.434403 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util\") pod \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") "
Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.435667 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle" (OuterVolumeSpecName: "bundle") pod "0e9e5f51-e676-4cb2-8e3e-b07341a3029a" (UID: "0e9e5f51-e676-4cb2-8e3e-b07341a3029a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.441297 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs" (OuterVolumeSpecName: "kube-api-access-zw4qs") pod "0e9e5f51-e676-4cb2-8e3e-b07341a3029a" (UID: "0e9e5f51-e676-4cb2-8e3e-b07341a3029a"). InnerVolumeSpecName "kube-api-access-zw4qs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.449523 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util" (OuterVolumeSpecName: "util") pod "0e9e5f51-e676-4cb2-8e3e-b07341a3029a" (UID: "0e9e5f51-e676-4cb2-8e3e-b07341a3029a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.536257 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util\") on node \"crc\" DevicePath \"\""
Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.536292 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw4qs\" (UniqueName: \"kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs\") on node \"crc\" DevicePath \"\""
Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.536304 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:14:26 crc kubenswrapper[4739]: I0218 14:14:26.138508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" event={"ID":"0e9e5f51-e676-4cb2-8e3e-b07341a3029a","Type":"ContainerDied","Data":"16fc8b4df0d353cf1de2e5a1109ebd6f73830749d657b3a4cc0dbd596b7a50ac"}
Feb 18 14:14:26 crc kubenswrapper[4739]: I0218 14:14:26.138552 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16fc8b4df0d353cf1de2e5a1109ebd6f73830749d657b3a4cc0dbd596b7a50ac"
Feb 18 14:14:26 crc kubenswrapper[4739]: I0218 14:14:26.138565 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"
Feb 18 14:14:29 crc kubenswrapper[4739]: I0218 14:14:29.373036 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:14:29 crc kubenswrapper[4739]: I0218 14:14:29.373121 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.156772 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"]
Feb 18 14:14:34 crc kubenswrapper[4739]: E0218 14:14:34.157875 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="extract"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.157893 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="extract"
Feb 18 14:14:34 crc kubenswrapper[4739]: E0218 14:14:34.157925 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="util"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.157935 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="util"
Feb 18 14:14:34 crc kubenswrapper[4739]: E0218 14:14:34.157946 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="pull"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.157955 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="pull"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.158149 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="extract"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.158917 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.161842 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.162485 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.162733 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-t5zkn"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.162905 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.163755 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.176288 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"]
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.289459 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-webhook-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.289531 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5t5\" (UniqueName: \"kubernetes.io/projected/d5023d08-507d-422f-b218-72057e18ef93-kube-api-access-jt5t5\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.289610 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-apiservice-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.391495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-webhook-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.391565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt5t5\" (UniqueName: \"kubernetes.io/projected/d5023d08-507d-422f-b218-72057e18ef93-kube-api-access-jt5t5\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.391612 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-apiservice-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.398182 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-webhook-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.398653 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-apiservice-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.416388 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt5t5\" (UniqueName: \"kubernetes.io/projected/d5023d08-507d-422f-b218-72057e18ef93-kube-api-access-jt5t5\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"
Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.479584 4739 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.612317 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g"] Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.613245 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.617752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.617829 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.617777 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-n6rkn" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.693086 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g"] Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.697516 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-apiservice-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.697581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-webhook-cert\") pod 
\"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.697608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xtp\" (UniqueName: \"kubernetes.io/projected/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-kube-api-access-67xtp\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.798726 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xtp\" (UniqueName: \"kubernetes.io/projected/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-kube-api-access-67xtp\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.798948 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-apiservice-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.798981 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-webhook-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 
14:14:34.810217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-webhook-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.835628 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xtp\" (UniqueName: \"kubernetes.io/projected/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-kube-api-access-67xtp\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.835993 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-apiservice-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.934132 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:35 crc kubenswrapper[4739]: I0218 14:14:35.166843 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"] Feb 18 14:14:35 crc kubenswrapper[4739]: I0218 14:14:35.210089 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" event={"ID":"d5023d08-507d-422f-b218-72057e18ef93","Type":"ContainerStarted","Data":"5b6710b41c8c3c3644f4b8c7ac01fa4faf08df9fe0f14b63b0e3bdea2b28ef57"} Feb 18 14:14:35 crc kubenswrapper[4739]: I0218 14:14:35.408879 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g"] Feb 18 14:14:36 crc kubenswrapper[4739]: I0218 14:14:36.217714 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" event={"ID":"0183ebc4-768c-4e08-8f1c-059fff8ba4e3","Type":"ContainerStarted","Data":"eeb5cddbd6c550ba6e509048f55545f5fc2085fd1334451a36c2b9dc38277cd1"} Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.289869 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" event={"ID":"0183ebc4-768c-4e08-8f1c-059fff8ba4e3","Type":"ContainerStarted","Data":"51d685075d5784c3ee8f2b4aece9414104ea75b1f0e897b19ab1e41648c0b843"} Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.290398 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.291750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" 
event={"ID":"d5023d08-507d-422f-b218-72057e18ef93","Type":"ContainerStarted","Data":"f464ee1c513741325a02b0bed74b4d6dad23cf297d2147cca8e5c0c204eafec2"} Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.291855 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.310330 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podStartSLOduration=2.128336893 podStartE2EDuration="8.310308488s" podCreationTimestamp="2026-02-18 14:14:34 +0000 UTC" firstStartedPulling="2026-02-18 14:14:35.427820038 +0000 UTC m=+907.923540960" lastFinishedPulling="2026-02-18 14:14:41.609791633 +0000 UTC m=+914.105512555" observedRunningTime="2026-02-18 14:14:42.30719526 +0000 UTC m=+914.802916202" watchObservedRunningTime="2026-02-18 14:14:42.310308488 +0000 UTC m=+914.806029410" Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.328742 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" podStartSLOduration=1.911598516 podStartE2EDuration="8.328725244s" podCreationTimestamp="2026-02-18 14:14:34 +0000 UTC" firstStartedPulling="2026-02-18 14:14:35.173823939 +0000 UTC m=+907.669544861" lastFinishedPulling="2026-02-18 14:14:41.590950667 +0000 UTC m=+914.086671589" observedRunningTime="2026-02-18 14:14:42.325316809 +0000 UTC m=+914.821037731" watchObservedRunningTime="2026-02-18 14:14:42.328725244 +0000 UTC m=+914.824446166" Feb 18 14:14:54 crc kubenswrapper[4739]: I0218 14:14:54.938908 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:59 crc kubenswrapper[4739]: I0218 14:14:59.372976 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:14:59 crc kubenswrapper[4739]: I0218 14:14:59.373583 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:14:59 crc kubenswrapper[4739]: I0218 14:14:59.373637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:14:59 crc kubenswrapper[4739]: I0218 14:14:59.374319 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:14:59 crc kubenswrapper[4739]: I0218 14:14:59.374373 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41" gracePeriod=600 Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.191063 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l"] Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.192721 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.195402 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.195436 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.207748 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l"] Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.342164 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.342287 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.342374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx6sh\" (UniqueName: \"kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.412763 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41" exitCode=0 Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.418888 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41"} Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.418946 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0"} Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.418967 4739 scope.go:117] "RemoveContainer" containerID="7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.443879 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.443969 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.444072 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx6sh\" (UniqueName: \"kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.445019 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.457834 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.474873 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx6sh\" (UniqueName: \"kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.519432 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.945963 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l"] Feb 18 14:15:00 crc kubenswrapper[4739]: W0218 14:15:00.950330 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c2918ab_f9b2_46b1_9895_7de44312e98e.slice/crio-e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101 WatchSource:0}: Error finding container e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101: Status 404 returned error can't find the container with id e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101 Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.425076 4739 generic.go:334] "Generic (PLEG): container finished" podID="8c2918ab-f9b2-46b1-9895-7de44312e98e" containerID="a63b0fe82e01dc057994e21049631942cf32124ffb8f8b9b2acf4cf4375ae993" exitCode=0 Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.425256 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" event={"ID":"8c2918ab-f9b2-46b1-9895-7de44312e98e","Type":"ContainerDied","Data":"a63b0fe82e01dc057994e21049631942cf32124ffb8f8b9b2acf4cf4375ae993"} Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.425434 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" event={"ID":"8c2918ab-f9b2-46b1-9895-7de44312e98e","Type":"ContainerStarted","Data":"e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101"} Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.538938 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 
14:15:01.540624 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.553303 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.660875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-988kn\" (UniqueName: \"kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.661175 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.661257 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.762512 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc 
kubenswrapper[4739]: I0218 14:15:01.762909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.762995 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-988kn\" (UniqueName: \"kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.763424 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.763571 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.783465 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-988kn\" (UniqueName: \"kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 
14:15:01.856941 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.408365 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:02 crc kubenswrapper[4739]: W0218 14:15:02.412965 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91302fcf_f057_4e35_9287_c67dfb9b396b.slice/crio-324e77c23f7c5fa6083dab3a0d4ac0b672a850505019a44ab6b6ebf08324aa98 WatchSource:0}: Error finding container 324e77c23f7c5fa6083dab3a0d4ac0b672a850505019a44ab6b6ebf08324aa98: Status 404 returned error can't find the container with id 324e77c23f7c5fa6083dab3a0d4ac0b672a850505019a44ab6b6ebf08324aa98 Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.433832 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerStarted","Data":"324e77c23f7c5fa6083dab3a0d4ac0b672a850505019a44ab6b6ebf08324aa98"} Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.829835 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.989417 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume\") pod \"8c2918ab-f9b2-46b1-9895-7de44312e98e\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.989562 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx6sh\" (UniqueName: \"kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh\") pod \"8c2918ab-f9b2-46b1-9895-7de44312e98e\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.989667 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume\") pod \"8c2918ab-f9b2-46b1-9895-7de44312e98e\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.990896 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c2918ab-f9b2-46b1-9895-7de44312e98e" (UID: "8c2918ab-f9b2-46b1-9895-7de44312e98e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.996296 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c2918ab-f9b2-46b1-9895-7de44312e98e" (UID: "8c2918ab-f9b2-46b1-9895-7de44312e98e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.996700 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh" (OuterVolumeSpecName: "kube-api-access-bx6sh") pod "8c2918ab-f9b2-46b1-9895-7de44312e98e" (UID: "8c2918ab-f9b2-46b1-9895-7de44312e98e"). InnerVolumeSpecName "kube-api-access-bx6sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.091996 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx6sh\" (UniqueName: \"kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.092075 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.092087 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.444645 4739 generic.go:334] "Generic (PLEG): container finished" podID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerID="71ade2fe74ee7f12971412c96fb1c41dff453214ea31392830a3982382cdb404" exitCode=0 Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.444766 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerDied","Data":"71ade2fe74ee7f12971412c96fb1c41dff453214ea31392830a3982382cdb404"} Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.448043 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" event={"ID":"8c2918ab-f9b2-46b1-9895-7de44312e98e","Type":"ContainerDied","Data":"e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101"} Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.448086 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101" Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.448122 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.324206 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:04 crc kubenswrapper[4739]: E0218 14:15:04.325100 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2918ab-f9b2-46b1-9895-7de44312e98e" containerName="collect-profiles" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.325164 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2918ab-f9b2-46b1-9895-7de44312e98e" containerName="collect-profiles" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.325346 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2918ab-f9b2-46b1-9895-7de44312e98e" containerName="collect-profiles" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.326406 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.337579 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.457498 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerStarted","Data":"e4dc897a4ecdb78cdabbf2e1e8ef1646b488972fc4ea441479e3e052fca42176"} Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.518377 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwcnn\" (UniqueName: \"kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.518662 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.518709 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.621001 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwcnn\" 
(UniqueName: \"kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.621118 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.621156 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.621831 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.621833 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.644820 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwcnn\" (UniqueName: 
\"kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.944255 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:05 crc kubenswrapper[4739]: I0218 14:15:05.408024 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:05 crc kubenswrapper[4739]: I0218 14:15:05.468107 4739 generic.go:334] "Generic (PLEG): container finished" podID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerID="e4dc897a4ecdb78cdabbf2e1e8ef1646b488972fc4ea441479e3e052fca42176" exitCode=0 Feb 18 14:15:05 crc kubenswrapper[4739]: I0218 14:15:05.468509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerDied","Data":"e4dc897a4ecdb78cdabbf2e1e8ef1646b488972fc4ea441479e3e052fca42176"} Feb 18 14:15:05 crc kubenswrapper[4739]: I0218 14:15:05.470853 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerStarted","Data":"41b5b1fa97b1f509032d4fb0932b3650e238b4807fc2c4bec6abbcc9cb202890"} Feb 18 14:15:06 crc kubenswrapper[4739]: I0218 14:15:06.484061 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerStarted","Data":"446e617bb4a35e73a566529673a4e33b0b816e8297774dc987dd15b6a9fb9a89"} Feb 18 14:15:06 crc kubenswrapper[4739]: I0218 14:15:06.486144 4739 generic.go:334] "Generic (PLEG): container finished" podID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" 
containerID="53b0c048fa457de86b418f0b4656b992fed992fd83e70f9b96b2297374e4d95f" exitCode=0 Feb 18 14:15:06 crc kubenswrapper[4739]: I0218 14:15:06.486179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerDied","Data":"53b0c048fa457de86b418f0b4656b992fed992fd83e70f9b96b2297374e4d95f"} Feb 18 14:15:06 crc kubenswrapper[4739]: I0218 14:15:06.511216 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8ss8" podStartSLOduration=3.094751954 podStartE2EDuration="5.511199311s" podCreationTimestamp="2026-02-18 14:15:01 +0000 UTC" firstStartedPulling="2026-02-18 14:15:03.447119799 +0000 UTC m=+935.942840721" lastFinishedPulling="2026-02-18 14:15:05.863567146 +0000 UTC m=+938.359288078" observedRunningTime="2026-02-18 14:15:06.506177627 +0000 UTC m=+939.001898569" watchObservedRunningTime="2026-02-18 14:15:06.511199311 +0000 UTC m=+939.006920233" Feb 18 14:15:07 crc kubenswrapper[4739]: I0218 14:15:07.494154 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerStarted","Data":"6e6190ad875fe157da0a09a2515c4a706de6d4f39b8ace7b14ac7a871a557108"} Feb 18 14:15:08 crc kubenswrapper[4739]: I0218 14:15:08.503737 4739 generic.go:334] "Generic (PLEG): container finished" podID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerID="6e6190ad875fe157da0a09a2515c4a706de6d4f39b8ace7b14ac7a871a557108" exitCode=0 Feb 18 14:15:08 crc kubenswrapper[4739]: I0218 14:15:08.503902 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerDied","Data":"6e6190ad875fe157da0a09a2515c4a706de6d4f39b8ace7b14ac7a871a557108"} Feb 18 14:15:09 crc kubenswrapper[4739]: I0218 
14:15:09.521961 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerStarted","Data":"cc557a8dbc62cb50f336eee295b41266868c40c03bb8377f8c4e3980b08dbe3f"} Feb 18 14:15:09 crc kubenswrapper[4739]: I0218 14:15:09.540330 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f88z9" podStartSLOduration=3.108443087 podStartE2EDuration="5.540314457s" podCreationTimestamp="2026-02-18 14:15:04 +0000 UTC" firstStartedPulling="2026-02-18 14:15:06.487645738 +0000 UTC m=+938.983366660" lastFinishedPulling="2026-02-18 14:15:08.919517108 +0000 UTC m=+941.415238030" observedRunningTime="2026-02-18 14:15:09.539655491 +0000 UTC m=+942.035376433" watchObservedRunningTime="2026-02-18 14:15:09.540314457 +0000 UTC m=+942.036035369" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.721191 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.723935 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.743119 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.827093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.827304 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.827386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4w8z\" (UniqueName: \"kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.857682 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.857745 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.906038 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.928658 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.928701 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4w8z\" (UniqueName: \"kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.928793 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.929138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.929173 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " 
pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.954431 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4w8z\" (UniqueName: \"kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:12 crc kubenswrapper[4739]: I0218 14:15:12.070601 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:12 crc kubenswrapper[4739]: I0218 14:15:12.584543 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:12 crc kubenswrapper[4739]: W0218 14:15:12.592867 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1d69322_06a6_4526_bb0c_be78ad5cd30d.slice/crio-e64d1cb00401da95c256148f61aaf82a1d57b40a900e4149df42275d07d8deec WatchSource:0}: Error finding container e64d1cb00401da95c256148f61aaf82a1d57b40a900e4149df42275d07d8deec: Status 404 returned error can't find the container with id e64d1cb00401da95c256148f61aaf82a1d57b40a900e4149df42275d07d8deec Feb 18 14:15:12 crc kubenswrapper[4739]: I0218 14:15:12.595176 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:13 crc kubenswrapper[4739]: I0218 14:15:13.552990 4739 generic.go:334] "Generic (PLEG): container finished" podID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerID="9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010" exitCode=0 Feb 18 14:15:13 crc kubenswrapper[4739]: I0218 14:15:13.553055 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" 
event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerDied","Data":"9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010"} Feb 18 14:15:13 crc kubenswrapper[4739]: I0218 14:15:13.553388 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerStarted","Data":"e64d1cb00401da95c256148f61aaf82a1d57b40a900e4149df42275d07d8deec"} Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.483071 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.563901 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerStarted","Data":"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0"} Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.712118 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.712409 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m8ss8" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="registry-server" containerID="cri-o://446e617bb4a35e73a566529673a4e33b0b816e8297774dc987dd15b6a9fb9a89" gracePeriod=2 Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.945287 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.945342 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.998785 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.166190 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-w8l6z"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.172241 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.174260 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-55s7l" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.174780 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.174895 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.176465 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.177653 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.181004 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.194985 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.276056 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8gqkq"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.286260 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lgdl\" (UniqueName: \"kubernetes.io/projected/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-kube-api-access-5lgdl\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.286350 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.286643 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxcz\" (UniqueName: \"kubernetes.io/projected/bf495248-0dde-4619-bce7-2cbbda1fd646-kube-api-access-gjxcz\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.286887 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" 
(UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-sockets\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.288974 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.289317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-conf\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.289389 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.289541 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-reloader\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.289622 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-startup\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.289650 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.295103 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-d5sjc" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.298849 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.299084 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.299129 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.316284 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-tr2nx"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.320100 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.323079 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.344669 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-tr2nx"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.391925 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-conf\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392282 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392346 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmrq\" (UniqueName: \"kubernetes.io/projected/65fdc711-6806-433f-9f62-a09e816c6acf-kube-api-access-zvmrq\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392373 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-reloader\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392384 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-conf\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392395 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-startup\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392553 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392647 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-reloader\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.392655 4739 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.392831 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert podName:bf495248-0dde-4619-bce7-2cbbda1fd646 nodeName:}" failed. No retries permitted until 2026-02-18 14:15:15.892764175 +0000 UTC m=+948.388485097 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert") pod "frr-k8s-webhook-server-78b44bf5bb-q8h4v" (UID: "bf495248-0dde-4619-bce7-2cbbda1fd646") : secret "frr-k8s-webhook-server-cert" not found Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392982 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65fdc711-6806-433f-9f62-a09e816c6acf-metallb-excludel2\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-startup\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lgdl\" (UniqueName: \"kubernetes.io/projected/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-kube-api-access-5lgdl\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs\") pod 
\"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjxcz\" (UniqueName: \"kubernetes.io/projected/bf495248-0dde-4619-bce7-2cbbda1fd646-kube-api-access-gjxcz\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393803 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393824 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-sockets\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393839 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-metrics-certs\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.394045 4739 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.394092 4739 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs podName:8ee20c2c-abb7-44a8-a5f9-8cacfce6f781 nodeName:}" failed. No retries permitted until 2026-02-18 14:15:15.894077828 +0000 UTC m=+948.389798750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs") pod "frr-k8s-w8l6z" (UID: "8ee20c2c-abb7-44a8-a5f9-8cacfce6f781") : secret "frr-k8s-certs-secret" not found Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.394561 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-sockets\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.415085 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lgdl\" (UniqueName: \"kubernetes.io/projected/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-kube-api-access-5lgdl\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.416676 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjxcz\" (UniqueName: \"kubernetes.io/projected/bf495248-0dde-4619-bce7-2cbbda1fd646-kube-api-access-gjxcz\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496343 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvmrq\" (UniqueName: \"kubernetes.io/projected/65fdc711-6806-433f-9f62-a09e816c6acf-kube-api-access-zvmrq\") pod \"speaker-8gqkq\" (UID: 
\"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496518 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65fdc711-6806-433f-9f62-a09e816c6acf-metallb-excludel2\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496614 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-cert\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496640 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-metrics-certs\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496666 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496689 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-metrics-certs\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: 
I0218 14:15:15.496774 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6drjl\" (UniqueName: \"kubernetes.io/projected/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-kube-api-access-6drjl\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.496801 4739 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.496856 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist podName:65fdc711-6806-433f-9f62-a09e816c6acf nodeName:}" failed. No retries permitted until 2026-02-18 14:15:15.996841222 +0000 UTC m=+948.492562144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist") pod "speaker-8gqkq" (UID: "65fdc711-6806-433f-9f62-a09e816c6acf") : secret "metallb-memberlist" not found Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.497501 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65fdc711-6806-433f-9f62-a09e816c6acf-metallb-excludel2\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.502233 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-metrics-certs\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.527158 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zvmrq\" (UniqueName: \"kubernetes.io/projected/65fdc711-6806-433f-9f62-a09e816c6acf-kube-api-access-zvmrq\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.573363 4739 generic.go:334] "Generic (PLEG): container finished" podID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerID="4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0" exitCode=0 Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.573466 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerDied","Data":"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0"} Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.576040 4739 generic.go:334] "Generic (PLEG): container finished" podID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerID="446e617bb4a35e73a566529673a4e33b0b816e8297774dc987dd15b6a9fb9a89" exitCode=0 Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.576118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerDied","Data":"446e617bb4a35e73a566529673a4e33b0b816e8297774dc987dd15b6a9fb9a89"} Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.598344 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-cert\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.598397 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-metrics-certs\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.598502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6drjl\" (UniqueName: \"kubernetes.io/projected/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-kube-api-access-6drjl\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.620539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-metrics-certs\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.621643 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-cert\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.640975 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6drjl\" (UniqueName: \"kubernetes.io/projected/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-kube-api-access-6drjl\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.673742 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 
14:15:15.677831 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.903660 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.904031 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.909082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.909440 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.006283 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:16 
crc kubenswrapper[4739]: E0218 14:15:16.006528 4739 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 14:15:16 crc kubenswrapper[4739]: E0218 14:15:16.006629 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist podName:65fdc711-6806-433f-9f62-a09e816c6acf nodeName:}" failed. No retries permitted until 2026-02-18 14:15:17.006603353 +0000 UTC m=+949.502324275 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist") pod "speaker-8gqkq" (UID: "65fdc711-6806-433f-9f62-a09e816c6acf") : secret "metallb-memberlist" not found Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.097014 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.107199 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.164107 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.309762 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-tr2nx"] Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.311017 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-988kn\" (UniqueName: \"kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn\") pod \"91302fcf-f057-4e35-9287-c67dfb9b396b\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.311108 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities\") pod \"91302fcf-f057-4e35-9287-c67dfb9b396b\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.311136 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content\") pod \"91302fcf-f057-4e35-9287-c67dfb9b396b\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.312334 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities" (OuterVolumeSpecName: "utilities") pod "91302fcf-f057-4e35-9287-c67dfb9b396b" (UID: "91302fcf-f057-4e35-9287-c67dfb9b396b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.315575 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn" (OuterVolumeSpecName: "kube-api-access-988kn") pod "91302fcf-f057-4e35-9287-c67dfb9b396b" (UID: "91302fcf-f057-4e35-9287-c67dfb9b396b"). InnerVolumeSpecName "kube-api-access-988kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.413789 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.413829 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-988kn\" (UniqueName: \"kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.454302 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91302fcf-f057-4e35-9287-c67dfb9b396b" (UID: "91302fcf-f057-4e35-9287-c67dfb9b396b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.515492 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.595765 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"72302f965aab99323370179ac49243577654ec94472789c9404e6d9268db802d"} Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.600492 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v"] Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.607056 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerDied","Data":"324e77c23f7c5fa6083dab3a0d4ac0b672a850505019a44ab6b6ebf08324aa98"} Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.607101 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.607117 4739 scope.go:117] "RemoveContainer" containerID="446e617bb4a35e73a566529673a4e33b0b816e8297774dc987dd15b6a9fb9a89" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.611898 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-tr2nx" event={"ID":"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d","Type":"ContainerStarted","Data":"de2ce2c2e7e8920c945292e32d288535f4d829f8fe7efd2af53224c6a19bfdd9"} Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.611926 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-tr2nx" event={"ID":"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d","Type":"ContainerStarted","Data":"4d75ca7837bbd3dcbc04c2d2a485376c2bbd7ba61474af3350c20db033a86d3a"} Feb 18 14:15:16 crc kubenswrapper[4739]: W0218 14:15:16.620298 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf495248_0dde_4619_bce7_2cbbda1fd646.slice/crio-10afffbfc38b905301885591e8c82407aac3135e27d325f7653f1742e43b4a12 WatchSource:0}: Error finding container 10afffbfc38b905301885591e8c82407aac3135e27d325f7653f1742e43b4a12: Status 404 returned error can't find the container with id 10afffbfc38b905301885591e8c82407aac3135e27d325f7653f1742e43b4a12 Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.668732 4739 scope.go:117] "RemoveContainer" containerID="e4dc897a4ecdb78cdabbf2e1e8ef1646b488972fc4ea441479e3e052fca42176" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.694137 4739 scope.go:117] "RemoveContainer" containerID="71ade2fe74ee7f12971412c96fb1c41dff453214ea31392830a3982382cdb404" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.702226 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:16 crc 
kubenswrapper[4739]: I0218 14:15:16.708605 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.024113 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.030094 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.169323 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8gqkq" Feb 18 14:15:17 crc kubenswrapper[4739]: W0218 14:15:17.199753 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65fdc711_6806_433f_9f62_a09e816c6acf.slice/crio-4068d9cd6f50fd513cb3b5db145a9297201d7f2fcc8c88484f21685dc268f875 WatchSource:0}: Error finding container 4068d9cd6f50fd513cb3b5db145a9297201d7f2fcc8c88484f21685dc268f875: Status 404 returned error can't find the container with id 4068d9cd6f50fd513cb3b5db145a9297201d7f2fcc8c88484f21685dc268f875 Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.620088 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-tr2nx" event={"ID":"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d","Type":"ContainerStarted","Data":"6d1d5ce500775d152181c59d53937e176ad0f24dbd787c28625bb76e8ba661ec"} Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.621511 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.624902 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerStarted","Data":"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d"} Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.626964 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" event={"ID":"bf495248-0dde-4619-bce7-2cbbda1fd646","Type":"ContainerStarted","Data":"10afffbfc38b905301885591e8c82407aac3135e27d325f7653f1742e43b4a12"} Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.631432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gqkq" event={"ID":"65fdc711-6806-433f-9f62-a09e816c6acf","Type":"ContainerStarted","Data":"e0f5239ecd0d03308f1e80f91a9ed7eb0f584e8c0d82253a4f43fe0ea69f33e0"} Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.631497 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gqkq" event={"ID":"65fdc711-6806-433f-9f62-a09e816c6acf","Type":"ContainerStarted","Data":"4068d9cd6f50fd513cb3b5db145a9297201d7f2fcc8c88484f21685dc268f875"} Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.649709 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-tr2nx" podStartSLOduration=2.649685873 podStartE2EDuration="2.649685873s" podCreationTimestamp="2026-02-18 14:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:15:17.647615811 +0000 UTC m=+950.143336733" watchObservedRunningTime="2026-02-18 14:15:17.649685873 +0000 UTC m=+950.145406805" Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.672477 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w6ms6" podStartSLOduration=3.678523792 podStartE2EDuration="6.672458917s" podCreationTimestamp="2026-02-18 14:15:11 +0000 UTC" firstStartedPulling="2026-02-18 14:15:13.554468841 +0000 UTC m=+946.050189763" lastFinishedPulling="2026-02-18 14:15:16.548403966 +0000 UTC m=+949.044124888" observedRunningTime="2026-02-18 14:15:17.669460852 +0000 UTC m=+950.165181794" watchObservedRunningTime="2026-02-18 14:15:17.672458917 +0000 UTC m=+950.168179849" Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.112421 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.112694 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f88z9" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="registry-server" containerID="cri-o://cc557a8dbc62cb50f336eee295b41266868c40c03bb8377f8c4e3980b08dbe3f" gracePeriod=2 Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.423624 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" path="/var/lib/kubelet/pods/91302fcf-f057-4e35-9287-c67dfb9b396b/volumes" Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.641894 4739 generic.go:334] "Generic (PLEG): container finished" podID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerID="cc557a8dbc62cb50f336eee295b41266868c40c03bb8377f8c4e3980b08dbe3f" exitCode=0 Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.641966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerDied","Data":"cc557a8dbc62cb50f336eee295b41266868c40c03bb8377f8c4e3980b08dbe3f"} Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.644798 
4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gqkq" event={"ID":"65fdc711-6806-433f-9f62-a09e816c6acf","Type":"ContainerStarted","Data":"1dd001ef3c188c7b8b2c41bc5a869d1fbda4c7d1806b440e90793ffcabf78902"} Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.645370 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8gqkq" Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.667653 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8gqkq" podStartSLOduration=3.667636415 podStartE2EDuration="3.667636415s" podCreationTimestamp="2026-02-18 14:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:15:18.663728959 +0000 UTC m=+951.159449901" watchObservedRunningTime="2026-02-18 14:15:18.667636415 +0000 UTC m=+951.163357337" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.272503 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.276303 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities\") pod \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.276348 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwcnn\" (UniqueName: \"kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn\") pod \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.276487 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content\") pod \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.278273 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities" (OuterVolumeSpecName: "utilities") pod "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" (UID: "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.293156 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn" (OuterVolumeSpecName: "kube-api-access-jwcnn") pod "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" (UID: "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8"). InnerVolumeSpecName "kube-api-access-jwcnn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.335003 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" (UID: "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.378217 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.378262 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwcnn\" (UniqueName: \"kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.378273 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.654578 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerDied","Data":"41b5b1fa97b1f509032d4fb0932b3650e238b4807fc2c4bec6abbcc9cb202890"} Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.654644 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.654666 4739 scope.go:117] "RemoveContainer" containerID="cc557a8dbc62cb50f336eee295b41266868c40c03bb8377f8c4e3980b08dbe3f" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.697266 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.706078 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.706915 4739 scope.go:117] "RemoveContainer" containerID="6e6190ad875fe157da0a09a2515c4a706de6d4f39b8ace7b14ac7a871a557108" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.741817 4739 scope.go:117] "RemoveContainer" containerID="53b0c048fa457de86b418f0b4656b992fed992fd83e70f9b96b2297374e4d95f" Feb 18 14:15:20 crc kubenswrapper[4739]: I0218 14:15:20.422265 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" path="/var/lib/kubelet/pods/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8/volumes" Feb 18 14:15:22 crc kubenswrapper[4739]: I0218 14:15:22.070846 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:22 crc kubenswrapper[4739]: I0218 14:15:22.071063 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:22 crc kubenswrapper[4739]: I0218 14:15:22.120199 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:22 crc kubenswrapper[4739]: I0218 14:15:22.741474 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:23 crc 
kubenswrapper[4739]: I0218 14:15:23.714821 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:24 crc kubenswrapper[4739]: I0218 14:15:24.700683 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w6ms6" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="registry-server" containerID="cri-o://45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d" gracePeriod=2 Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.214501 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.383053 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4w8z\" (UniqueName: \"kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z\") pod \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.383403 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content\") pod \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.383436 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities\") pod \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.384782 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities" (OuterVolumeSpecName: "utilities") pod "c1d69322-06a6-4526-bb0c-be78ad5cd30d" (UID: "c1d69322-06a6-4526-bb0c-be78ad5cd30d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.389128 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z" (OuterVolumeSpecName: "kube-api-access-t4w8z") pod "c1d69322-06a6-4526-bb0c-be78ad5cd30d" (UID: "c1d69322-06a6-4526-bb0c-be78ad5cd30d"). InnerVolumeSpecName "kube-api-access-t4w8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.486521 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4w8z\" (UniqueName: \"kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.486709 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.565089 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1d69322-06a6-4526-bb0c-be78ad5cd30d" (UID: "c1d69322-06a6-4526-bb0c-be78ad5cd30d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.587817 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.708815 4739 generic.go:334] "Generic (PLEG): container finished" podID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerID="9fb59546736878df9c22754f43e72cb090776382ecdd7901f64cc1b5ca20d30f" exitCode=0 Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.708885 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerDied","Data":"9fb59546736878df9c22754f43e72cb090776382ecdd7901f64cc1b5ca20d30f"} Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.713587 4739 generic.go:334] "Generic (PLEG): container finished" podID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerID="45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d" exitCode=0 Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.713691 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerDied","Data":"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d"} Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.713752 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerDied","Data":"e64d1cb00401da95c256148f61aaf82a1d57b40a900e4149df42275d07d8deec"} Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.713700 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.713777 4739 scope.go:117] "RemoveContainer" containerID="45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.715069 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" event={"ID":"bf495248-0dde-4619-bce7-2cbbda1fd646","Type":"ContainerStarted","Data":"f193f450786c60c2f37e5a77c47cc484056cd8e9abe8c794be08a7f19c0d6903"} Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.715643 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.739739 4739 scope.go:117] "RemoveContainer" containerID="4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.753625 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" podStartSLOduration=2.469712797 podStartE2EDuration="10.753607792s" podCreationTimestamp="2026-02-18 14:15:15 +0000 UTC" firstStartedPulling="2026-02-18 14:15:16.623106416 +0000 UTC m=+949.118827338" lastFinishedPulling="2026-02-18 14:15:24.907001411 +0000 UTC m=+957.402722333" observedRunningTime="2026-02-18 14:15:25.75311574 +0000 UTC m=+958.248836672" watchObservedRunningTime="2026-02-18 14:15:25.753607792 +0000 UTC m=+958.249328714" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.777514 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.785624 4739 scope.go:117] "RemoveContainer" containerID="9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 
14:15:25.789581 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.808796 4739 scope.go:117] "RemoveContainer" containerID="45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d" Feb 18 14:15:25 crc kubenswrapper[4739]: E0218 14:15:25.809277 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d\": container with ID starting with 45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d not found: ID does not exist" containerID="45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.809309 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d"} err="failed to get container status \"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d\": rpc error: code = NotFound desc = could not find container \"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d\": container with ID starting with 45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d not found: ID does not exist" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.809332 4739 scope.go:117] "RemoveContainer" containerID="4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0" Feb 18 14:15:25 crc kubenswrapper[4739]: E0218 14:15:25.810246 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0\": container with ID starting with 4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0 not found: ID does not exist" 
containerID="4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.810270 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0"} err="failed to get container status \"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0\": rpc error: code = NotFound desc = could not find container \"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0\": container with ID starting with 4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0 not found: ID does not exist" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.810284 4739 scope.go:117] "RemoveContainer" containerID="9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010" Feb 18 14:15:25 crc kubenswrapper[4739]: E0218 14:15:25.810739 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010\": container with ID starting with 9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010 not found: ID does not exist" containerID="9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.810762 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010"} err="failed to get container status \"9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010\": rpc error: code = NotFound desc = could not find container \"9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010\": container with ID starting with 9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010 not found: ID does not exist" Feb 18 14:15:26 crc kubenswrapper[4739]: I0218 14:15:26.420473 4739 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" path="/var/lib/kubelet/pods/c1d69322-06a6-4526-bb0c-be78ad5cd30d/volumes" Feb 18 14:15:26 crc kubenswrapper[4739]: I0218 14:15:26.724499 4739 generic.go:334] "Generic (PLEG): container finished" podID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerID="2e6b4ed84ac523c6e3507964c5be205d57bc199e6c95f3a77dd2245daf60fdb1" exitCode=0 Feb 18 14:15:26 crc kubenswrapper[4739]: I0218 14:15:26.724572 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerDied","Data":"2e6b4ed84ac523c6e3507964c5be205d57bc199e6c95f3a77dd2245daf60fdb1"} Feb 18 14:15:27 crc kubenswrapper[4739]: I0218 14:15:27.174303 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8gqkq" Feb 18 14:15:27 crc kubenswrapper[4739]: I0218 14:15:27.735046 4739 generic.go:334] "Generic (PLEG): container finished" podID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerID="b27ec3209a4a1bf86065d475c6c4fd1737d6aa46155833527d9114a5ddf2cfd7" exitCode=0 Feb 18 14:15:27 crc kubenswrapper[4739]: I0218 14:15:27.735129 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerDied","Data":"b27ec3209a4a1bf86065d475c6c4fd1737d6aa46155833527d9114a5ddf2cfd7"} Feb 18 14:15:28 crc kubenswrapper[4739]: I0218 14:15:28.751030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"a671b9560b84c2bc2337e7cd0dbd0611b4e01b445f1313409dce388c125db15e"} Feb 18 14:15:28 crc kubenswrapper[4739]: I0218 14:15:28.751354 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" 
event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"ef158908c5c0a8407b5e65bec469b2eb70cab108e0e4cb3f92bca1b90e937911"} Feb 18 14:15:28 crc kubenswrapper[4739]: I0218 14:15:28.751365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"239a0d1abe9b57abf7c29d7eb2654954b99c35d6af28d597ae1aa5e0324e8a86"} Feb 18 14:15:28 crc kubenswrapper[4739]: I0218 14:15:28.751374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"4b1aee6726e01b4f3e809ead95869c18e7f0932b5c6c23caf9d58537654c4378"} Feb 18 14:15:28 crc kubenswrapper[4739]: I0218 14:15:28.751382 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"d7ace940b5988463e3b8c7226207627946089b351b948bda4a9be22ff01d488d"} Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.763551 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"e7475631559454730a0a662325b3f48366fe7bb27b8e8120bbb67c00be5149a3"} Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.763987 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.806988 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-w8l6z" podStartSLOduration=6.353931564 podStartE2EDuration="14.806965077s" podCreationTimestamp="2026-02-18 14:15:15 +0000 UTC" firstStartedPulling="2026-02-18 14:15:16.454553233 +0000 UTC m=+948.950274155" lastFinishedPulling="2026-02-18 14:15:24.907586746 +0000 UTC m=+957.403307668" 
observedRunningTime="2026-02-18 14:15:29.794915458 +0000 UTC m=+962.290636390" watchObservedRunningTime="2026-02-18 14:15:29.806965077 +0000 UTC m=+962.302686019" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821079 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"] Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821483 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821502 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821522 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821530 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821546 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821554 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821580 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821587 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821598 4739 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821604 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821621 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821628 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821635 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821641 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821655 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821662 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821674 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821681 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821833 4739 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821852 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821864 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.822495 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.828247 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.828875 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hnndv" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.829080 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.836122 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"] Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.969195 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpzxr\" (UniqueName: \"kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr\") pod \"openstack-operator-index-pkgt6\" (UID: \"963fc9d2-81a3-4bff-babb-9a1fb7115773\") " pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:30 crc kubenswrapper[4739]: I0218 14:15:30.071582 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jpzxr\" (UniqueName: \"kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr\") pod \"openstack-operator-index-pkgt6\" (UID: \"963fc9d2-81a3-4bff-babb-9a1fb7115773\") " pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:30 crc kubenswrapper[4739]: I0218 14:15:30.095317 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpzxr\" (UniqueName: \"kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr\") pod \"openstack-operator-index-pkgt6\" (UID: \"963fc9d2-81a3-4bff-babb-9a1fb7115773\") " pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:30 crc kubenswrapper[4739]: I0218 14:15:30.148528 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:30 crc kubenswrapper[4739]: I0218 14:15:30.643588 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"] Feb 18 14:15:30 crc kubenswrapper[4739]: W0218 14:15:30.646414 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod963fc9d2_81a3_4bff_babb_9a1fb7115773.slice/crio-c5574386456e272c32c036f065922a9ec16cab39222d7fd39ec5aa7c6a71a863 WatchSource:0}: Error finding container c5574386456e272c32c036f065922a9ec16cab39222d7fd39ec5aa7c6a71a863: Status 404 returned error can't find the container with id c5574386456e272c32c036f065922a9ec16cab39222d7fd39ec5aa7c6a71a863 Feb 18 14:15:30 crc kubenswrapper[4739]: I0218 14:15:30.779972 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkgt6" event={"ID":"963fc9d2-81a3-4bff-babb-9a1fb7115773","Type":"ContainerStarted","Data":"c5574386456e272c32c036f065922a9ec16cab39222d7fd39ec5aa7c6a71a863"} Feb 18 14:15:31 crc 
kubenswrapper[4739]: I0218 14:15:31.097527 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:31 crc kubenswrapper[4739]: I0218 14:15:31.140015 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:33 crc kubenswrapper[4739]: I0218 14:15:33.188036 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"] Feb 18 14:15:33 crc kubenswrapper[4739]: I0218 14:15:33.790109 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cnhvq"] Feb 18 14:15:33 crc kubenswrapper[4739]: I0218 14:15:33.791248 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:33 crc kubenswrapper[4739]: I0218 14:15:33.798925 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cnhvq"] Feb 18 14:15:33 crc kubenswrapper[4739]: I0218 14:15:33.946972 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqq8n\" (UniqueName: \"kubernetes.io/projected/07815587-810f-4c17-a671-8c613b3755d6-kube-api-access-pqq8n\") pod \"openstack-operator-index-cnhvq\" (UID: \"07815587-810f-4c17-a671-8c613b3755d6\") " pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.048898 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqq8n\" (UniqueName: \"kubernetes.io/projected/07815587-810f-4c17-a671-8c613b3755d6-kube-api-access-pqq8n\") pod \"openstack-operator-index-cnhvq\" (UID: \"07815587-810f-4c17-a671-8c613b3755d6\") " pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.070213 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pqq8n\" (UniqueName: \"kubernetes.io/projected/07815587-810f-4c17-a671-8c613b3755d6-kube-api-access-pqq8n\") pod \"openstack-operator-index-cnhvq\" (UID: \"07815587-810f-4c17-a671-8c613b3755d6\") " pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.113927 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.561234 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cnhvq"] Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.813726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnhvq" event={"ID":"07815587-810f-4c17-a671-8c613b3755d6","Type":"ContainerStarted","Data":"9505b21dad977e1c9574975d46a44ddbf423ca920e9dce1f532ba31a4a892548"} Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.815940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkgt6" event={"ID":"963fc9d2-81a3-4bff-babb-9a1fb7115773","Type":"ContainerStarted","Data":"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7"} Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.816063 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-pkgt6" podUID="963fc9d2-81a3-4bff-babb-9a1fb7115773" containerName="registry-server" containerID="cri-o://7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7" gracePeriod=2 Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.845281 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pkgt6" podStartSLOduration=2.665624203 podStartE2EDuration="5.845239606s" 
podCreationTimestamp="2026-02-18 14:15:29 +0000 UTC" firstStartedPulling="2026-02-18 14:15:30.648570363 +0000 UTC m=+963.144291285" lastFinishedPulling="2026-02-18 14:15:33.828185766 +0000 UTC m=+966.323906688" observedRunningTime="2026-02-18 14:15:34.83610933 +0000 UTC m=+967.331830252" watchObservedRunningTime="2026-02-18 14:15:34.845239606 +0000 UTC m=+967.340960568" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.302658 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.473969 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpzxr\" (UniqueName: \"kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr\") pod \"963fc9d2-81a3-4bff-babb-9a1fb7115773\" (UID: \"963fc9d2-81a3-4bff-babb-9a1fb7115773\") " Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.492596 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr" (OuterVolumeSpecName: "kube-api-access-jpzxr") pod "963fc9d2-81a3-4bff-babb-9a1fb7115773" (UID: "963fc9d2-81a3-4bff-babb-9a1fb7115773"). InnerVolumeSpecName "kube-api-access-jpzxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.576194 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpzxr\" (UniqueName: \"kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.683187 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.824562 4739 generic.go:334] "Generic (PLEG): container finished" podID="963fc9d2-81a3-4bff-babb-9a1fb7115773" containerID="7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7" exitCode=0 Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.824605 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkgt6" event={"ID":"963fc9d2-81a3-4bff-babb-9a1fb7115773","Type":"ContainerDied","Data":"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7"} Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.824632 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pkgt6"
Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.824647 4739 scope.go:117] "RemoveContainer" containerID="7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7"
Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.824635 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkgt6" event={"ID":"963fc9d2-81a3-4bff-babb-9a1fb7115773","Type":"ContainerDied","Data":"c5574386456e272c32c036f065922a9ec16cab39222d7fd39ec5aa7c6a71a863"}
Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.851656 4739 scope.go:117] "RemoveContainer" containerID="7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7"
Feb 18 14:15:35 crc kubenswrapper[4739]: E0218 14:15:35.852636 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7\": container with ID starting with 7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7 not found: ID does not exist" containerID="7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7"
Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.852690 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7"} err="failed to get container status \"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7\": rpc error: code = NotFound desc = could not find container \"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7\": container with ID starting with 7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7 not found: ID does not exist"
Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.867680 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"]
Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.874005 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"]
Feb 18 14:15:36 crc kubenswrapper[4739]: I0218 14:15:36.170484 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v"
Feb 18 14:15:36 crc kubenswrapper[4739]: I0218 14:15:36.420706 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="963fc9d2-81a3-4bff-babb-9a1fb7115773" path="/var/lib/kubelet/pods/963fc9d2-81a3-4bff-babb-9a1fb7115773/volumes"
Feb 18 14:15:36 crc kubenswrapper[4739]: I0218 14:15:36.833305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnhvq" event={"ID":"07815587-810f-4c17-a671-8c613b3755d6","Type":"ContainerStarted","Data":"f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea"}
Feb 18 14:15:36 crc kubenswrapper[4739]: I0218 14:15:36.857947 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cnhvq" podStartSLOduration=3.367101265 podStartE2EDuration="3.857925997s" podCreationTimestamp="2026-02-18 14:15:33 +0000 UTC" firstStartedPulling="2026-02-18 14:15:34.570019152 +0000 UTC m=+967.065740074" lastFinishedPulling="2026-02-18 14:15:35.060843884 +0000 UTC m=+967.556564806" observedRunningTime="2026-02-18 14:15:36.850863502 +0000 UTC m=+969.346584424" watchObservedRunningTime="2026-02-18 14:15:36.857925997 +0000 UTC m=+969.353646919"
Feb 18 14:15:44 crc kubenswrapper[4739]: I0218 14:15:44.114772 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cnhvq"
Feb 18 14:15:44 crc kubenswrapper[4739]: I0218 14:15:44.116406 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cnhvq"
Feb 18 14:15:44 crc kubenswrapper[4739]: I0218 14:15:44.149958 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cnhvq"
Feb 18 14:15:44 crc kubenswrapper[4739]: I0218 14:15:44.920690 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cnhvq"
Feb 18 14:15:46 crc kubenswrapper[4739]: I0218 14:15:46.102234 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-w8l6z"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.261171 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"]
Feb 18 14:15:48 crc kubenswrapper[4739]: E0218 14:15:48.261854 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963fc9d2-81a3-4bff-babb-9a1fb7115773" containerName="registry-server"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.261870 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="963fc9d2-81a3-4bff-babb-9a1fb7115773" containerName="registry-server"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.262023 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="963fc9d2-81a3-4bff-babb-9a1fb7115773" containerName="registry-server"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.263112 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.273347 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-klmk6"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.279548 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"]
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.396876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vqqm\" (UniqueName: \"kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.396999 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.397045 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.499184 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vqqm\" (UniqueName: \"kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.499289 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.499309 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.500055 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.500190 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.518395 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vqqm\" (UniqueName: \"kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.594669 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:49 crc kubenswrapper[4739]: I0218 14:15:49.091342 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"]
Feb 18 14:15:49 crc kubenswrapper[4739]: I0218 14:15:49.926047 4739 generic.go:334] "Generic (PLEG): container finished" podID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerID="32de7c6e239f1a69b1c74587fb009358e16bcf6b229d6593aea826e2cf650bb8" exitCode=0
Feb 18 14:15:49 crc kubenswrapper[4739]: I0218 14:15:49.926201 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" event={"ID":"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90","Type":"ContainerDied","Data":"32de7c6e239f1a69b1c74587fb009358e16bcf6b229d6593aea826e2cf650bb8"}
Feb 18 14:15:49 crc kubenswrapper[4739]: I0218 14:15:49.927698 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" event={"ID":"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90","Type":"ContainerStarted","Data":"6a3167aabfdbaeb9a39dd02415677d240e1997d1df9fa884cccff5b8dce7d89f"}
Feb 18 14:15:50 crc kubenswrapper[4739]: I0218 14:15:50.936907 4739 generic.go:334] "Generic (PLEG): container finished" podID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerID="f18e1854dd5dfb60bd94c9d9af2d4022fc31b8efbeb1359e9c55eb85e25412d8" exitCode=0
Feb 18 14:15:50 crc kubenswrapper[4739]: I0218 14:15:50.937006 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" event={"ID":"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90","Type":"ContainerDied","Data":"f18e1854dd5dfb60bd94c9d9af2d4022fc31b8efbeb1359e9c55eb85e25412d8"}
Feb 18 14:15:51 crc kubenswrapper[4739]: I0218 14:15:51.950660 4739 generic.go:334] "Generic (PLEG): container finished" podID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerID="8e34dc4d4a2bf97a56bb2ed7f9d89a54d54cb6b68824876ac7647a80ec5532f1" exitCode=0
Feb 18 14:15:51 crc kubenswrapper[4739]: I0218 14:15:51.950704 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" event={"ID":"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90","Type":"ContainerDied","Data":"8e34dc4d4a2bf97a56bb2ed7f9d89a54d54cb6b68824876ac7647a80ec5532f1"}
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.295611 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.389570 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vqqm\" (UniqueName: \"kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm\") pod \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") "
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.389743 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle\") pod \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") "
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.389761 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util\") pod \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") "
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.390355 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle" (OuterVolumeSpecName: "bundle") pod "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" (UID: "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.396651 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm" (OuterVolumeSpecName: "kube-api-access-4vqqm") pod "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" (UID: "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90"). InnerVolumeSpecName "kube-api-access-4vqqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.435042 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util" (OuterVolumeSpecName: "util") pod "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" (UID: "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.491762 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.491793 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util\") on node \"crc\" DevicePath \"\""
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.491802 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vqqm\" (UniqueName: \"kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm\") on node \"crc\" DevicePath \"\""
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.969420 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" event={"ID":"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90","Type":"ContainerDied","Data":"6a3167aabfdbaeb9a39dd02415677d240e1997d1df9fa884cccff5b8dce7d89f"}
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.969495 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a3167aabfdbaeb9a39dd02415677d240e1997d1df9fa884cccff5b8dce7d89f"
Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.969606 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.760708 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"]
Feb 18 14:15:55 crc kubenswrapper[4739]: E0218 14:15:55.761269 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="util"
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.761285 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="util"
Feb 18 14:15:55 crc kubenswrapper[4739]: E0218 14:15:55.761301 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="pull"
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.761306 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="pull"
Feb 18 14:15:55 crc kubenswrapper[4739]: E0218 14:15:55.761325 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="extract"
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.761331 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="extract"
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.761495 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="extract"
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.762011 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.768902 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-c9zzv"
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.805153 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"]
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.835457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds4sm\" (UniqueName: \"kubernetes.io/projected/8bf4ed0a-8055-462b-9324-1fa1c4f429b1-kube-api-access-ds4sm\") pod \"openstack-operator-controller-init-5864f6ff6b-7n5hc\" (UID: \"8bf4ed0a-8055-462b-9324-1fa1c4f429b1\") " pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.937813 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds4sm\" (UniqueName: \"kubernetes.io/projected/8bf4ed0a-8055-462b-9324-1fa1c4f429b1-kube-api-access-ds4sm\") pod \"openstack-operator-controller-init-5864f6ff6b-7n5hc\" (UID: \"8bf4ed0a-8055-462b-9324-1fa1c4f429b1\") " pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"
Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.962689 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds4sm\" (UniqueName: \"kubernetes.io/projected/8bf4ed0a-8055-462b-9324-1fa1c4f429b1-kube-api-access-ds4sm\") pod \"openstack-operator-controller-init-5864f6ff6b-7n5hc\" (UID: \"8bf4ed0a-8055-462b-9324-1fa1c4f429b1\") " pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"
Feb 18 14:15:56 crc kubenswrapper[4739]: I0218 14:15:56.082139 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"
Feb 18 14:15:56 crc kubenswrapper[4739]: I0218 14:15:56.690746 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"]
Feb 18 14:15:57 crc kubenswrapper[4739]: I0218 14:15:57.000181 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" event={"ID":"8bf4ed0a-8055-462b-9324-1fa1c4f429b1","Type":"ContainerStarted","Data":"4b35c274b1e3a6ef1488630d4737649bac48378f7878ea6c2aaf192f7166ee92"}
Feb 18 14:16:02 crc kubenswrapper[4739]: I0218 14:16:02.057734 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" event={"ID":"8bf4ed0a-8055-462b-9324-1fa1c4f429b1","Type":"ContainerStarted","Data":"5759fd8109936917e6ed4c7e129fd30005aaf4dcfe90f5c9e8acb4c336baff58"}
Feb 18 14:16:02 crc kubenswrapper[4739]: I0218 14:16:02.059297 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"
Feb 18 14:16:02 crc kubenswrapper[4739]: I0218 14:16:02.092831 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" podStartSLOduration=2.135728294 podStartE2EDuration="7.09281354s" podCreationTimestamp="2026-02-18 14:15:55 +0000 UTC" firstStartedPulling="2026-02-18 14:15:56.701422261 +0000 UTC m=+989.197143183" lastFinishedPulling="2026-02-18 14:16:01.658507507 +0000 UTC m=+994.154228429" observedRunningTime="2026-02-18 14:16:02.085293514 +0000 UTC m=+994.581014456" watchObservedRunningTime="2026-02-18 14:16:02.09281354 +0000 UTC m=+994.588534452"
Feb 18 14:16:06 crc kubenswrapper[4739]: I0218 14:16:06.085534 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.295085 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.296813 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.299350 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-p9bbp"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.306206 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.307332 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.311827 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vkk6d"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.312594 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxlv\" (UniqueName: \"kubernetes.io/projected/61bc4b17-baf6-435c-9280-b97fcede913c-kube-api-access-4fxlv\") pod \"barbican-operator-controller-manager-868647ff47-knpz9\" (UID: \"61bc4b17-baf6-435c-9280-b97fcede913c\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.323957 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.339809 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.341082 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.349384 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-sx45f"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.381060 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.397413 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.416518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxhfm\" (UniqueName: \"kubernetes.io/projected/c8f419fe-23b1-4a93-97fe-05071df32425-kube-api-access-zxhfm\") pod \"designate-operator-controller-manager-6d8bf5c495-47445\" (UID: \"c8f419fe-23b1-4a93-97fe-05071df32425\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.416624 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcbfw\" (UniqueName: \"kubernetes.io/projected/d617f67f-2577-418f-a367-42c366c17980-kube-api-access-bcbfw\") pod \"cinder-operator-controller-manager-5d946d989d-b9hds\" (UID: \"d617f67f-2577-418f-a367-42c366c17980\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.416750 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxlv\" (UniqueName: \"kubernetes.io/projected/61bc4b17-baf6-435c-9280-b97fcede913c-kube-api-access-4fxlv\") pod \"barbican-operator-controller-manager-868647ff47-knpz9\" (UID: \"61bc4b17-baf6-435c-9280-b97fcede913c\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.417168 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.418492 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.420340 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vsf2g"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.453398 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.466279 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxlv\" (UniqueName: \"kubernetes.io/projected/61bc4b17-baf6-435c-9280-b97fcede913c-kube-api-access-4fxlv\") pod \"barbican-operator-controller-manager-868647ff47-knpz9\" (UID: \"61bc4b17-baf6-435c-9280-b97fcede913c\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.481657 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.482724 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.492852 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.496574 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-d59wz"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.516638 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.517644 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.518020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcbfw\" (UniqueName: \"kubernetes.io/projected/d617f67f-2577-418f-a367-42c366c17980-kube-api-access-bcbfw\") pod \"cinder-operator-controller-manager-5d946d989d-b9hds\" (UID: \"d617f67f-2577-418f-a367-42c366c17980\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.518090 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk9w2\" (UniqueName: \"kubernetes.io/projected/60bad312-a989-43d1-87e6-6c6f10d1ae8f-kube-api-access-fk9w2\") pod \"heat-operator-controller-manager-69f49c598c-m469j\" (UID: \"60bad312-a989-43d1-87e6-6c6f10d1ae8f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.518271 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxhfm\" (UniqueName: \"kubernetes.io/projected/c8f419fe-23b1-4a93-97fe-05071df32425-kube-api-access-zxhfm\") pod \"designate-operator-controller-manager-6d8bf5c495-47445\" (UID: \"c8f419fe-23b1-4a93-97fe-05071df32425\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.518320 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc7qx\" (UniqueName: \"kubernetes.io/projected/19470a60-c796-4a28-a0e2-65b50fa94ea6-kube-api-access-pc7qx\") pod \"glance-operator-controller-manager-77987464f4-hxdbh\" (UID: \"19470a60-c796-4a28-a0e2-65b50fa94ea6\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.524722 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qdwzx"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.524869 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.526591 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.529166 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.534510 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fhpzj"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.549245 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.574564 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcbfw\" (UniqueName: \"kubernetes.io/projected/d617f67f-2577-418f-a367-42c366c17980-kube-api-access-bcbfw\") pod \"cinder-operator-controller-manager-5d946d989d-b9hds\" (UID: \"d617f67f-2577-418f-a367-42c366c17980\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.574965 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.591897 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.593803 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.593854 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxhfm\" (UniqueName: \"kubernetes.io/projected/c8f419fe-23b1-4a93-97fe-05071df32425-kube-api-access-zxhfm\") pod \"designate-operator-controller-manager-6d8bf5c495-47445\" (UID: \"c8f419fe-23b1-4a93-97fe-05071df32425\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.599176 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-495vm"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.618550 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk9w2\" (UniqueName: \"kubernetes.io/projected/60bad312-a989-43d1-87e6-6c6f10d1ae8f-kube-api-access-fk9w2\") pod \"heat-operator-controller-manager-69f49c598c-m469j\" (UID: \"60bad312-a989-43d1-87e6-6c6f10d1ae8f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623397 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623466 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nrwb\" (UniqueName: \"kubernetes.io/projected/877f7fe3-168f-4b05-a88e-a7a11bf45e36-kube-api-access-5nrwb\") pod \"horizon-operator-controller-manager-5b9b8895d5-xhkdh\" (UID: \"877f7fe3-168f-4b05-a88e-a7a11bf45e36\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623535 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm8wh\" (UniqueName: \"kubernetes.io/projected/b1d0315e-6ccb-4c6a-a488-98454bb41358-kube-api-access-gm8wh\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623575 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kvnb\" (UniqueName: \"kubernetes.io/projected/fb608395-17b5-4b92-a0be-b5abc08ac979-kube-api-access-2kvnb\") pod \"ironic-operator-controller-manager-554564d7fc-hrxn2\" (UID: \"fb608395-17b5-4b92-a0be-b5abc08ac979\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623611 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc7qx\" (UniqueName: \"kubernetes.io/projected/19470a60-c796-4a28-a0e2-65b50fa94ea6-kube-api-access-pc7qx\") pod \"glance-operator-controller-manager-77987464f4-hxdbh\" (UID: \"19470a60-c796-4a28-a0e2-65b50fa94ea6\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.625856 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.645098 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.663901 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.685918 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.687686 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.694393 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-sg7jb"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.699769 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk9w2\" (UniqueName: \"kubernetes.io/projected/60bad312-a989-43d1-87e6-6c6f10d1ae8f-kube-api-access-fk9w2\") pod \"heat-operator-controller-manager-69f49c598c-m469j\" (UID: \"60bad312-a989-43d1-87e6-6c6f10d1ae8f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.701367 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.702993 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc7qx\" (UniqueName: \"kubernetes.io/projected/19470a60-c796-4a28-a0e2-65b50fa94ea6-kube-api-access-pc7qx\") pod \"glance-operator-controller-manager-77987464f4-hxdbh\" (UID: \"19470a60-c796-4a28-a0e2-65b50fa94ea6\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.726644 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-prt26"]
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.727905 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.729277 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.729343 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nrwb\" (UniqueName: \"kubernetes.io/projected/877f7fe3-168f-4b05-a88e-a7a11bf45e36-kube-api-access-5nrwb\") pod \"horizon-operator-controller-manager-5b9b8895d5-xhkdh\" (UID: \"877f7fe3-168f-4b05-a88e-a7a11bf45e36\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh"
Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.729402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm8wh\" (UniqueName: \"kubernetes.io/projected/b1d0315e-6ccb-4c6a-a488-98454bb41358-kube-api-access-gm8wh\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") "
pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.729434 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kvnb\" (UniqueName: \"kubernetes.io/projected/fb608395-17b5-4b92-a0be-b5abc08ac979-kube-api-access-2kvnb\") pod \"ironic-operator-controller-manager-554564d7fc-hrxn2\" (UID: \"fb608395-17b5-4b92-a0be-b5abc08ac979\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.729478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmxsd\" (UniqueName: \"kubernetes.io/projected/2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682-kube-api-access-rmxsd\") pod \"keystone-operator-controller-manager-b4d948c87-q4vb2\" (UID: \"2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:16:29 crc kubenswrapper[4739]: E0218 14:16:29.729651 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:29 crc kubenswrapper[4739]: E0218 14:16:29.729693 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert podName:b1d0315e-6ccb-4c6a-a488-98454bb41358 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:30.229675905 +0000 UTC m=+1022.725396817 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert") pod "infra-operator-controller-manager-79d975b745-54k4b" (UID: "b1d0315e-6ccb-4c6a-a488-98454bb41358") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.734484 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.737262 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.744981 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-kj4qq" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.745363 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-9ntrq" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.766321 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.794841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm8wh\" (UniqueName: \"kubernetes.io/projected/b1d0315e-6ccb-4c6a-a488-98454bb41358-kube-api-access-gm8wh\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.798484 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.799630 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.800862 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kvnb\" (UniqueName: \"kubernetes.io/projected/fb608395-17b5-4b92-a0be-b5abc08ac979-kube-api-access-2kvnb\") pod \"ironic-operator-controller-manager-554564d7fc-hrxn2\" (UID: \"fb608395-17b5-4b92-a0be-b5abc08ac979\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.805093 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nrwb\" (UniqueName: \"kubernetes.io/projected/877f7fe3-168f-4b05-a88e-a7a11bf45e36-kube-api-access-5nrwb\") pod \"horizon-operator-controller-manager-5b9b8895d5-xhkdh\" (UID: \"877f7fe3-168f-4b05-a88e-a7a11bf45e36\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.807070 4739 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.808809 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.816254 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zkkf6" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.828306 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.828397 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vmp8w" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.834857 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97vmz\" (UniqueName: \"kubernetes.io/projected/209f2e6c-29e9-444b-b14a-10eadb782a59-kube-api-access-97vmz\") pod \"manila-operator-controller-manager-54f6768c69-prt26\" (UID: \"209f2e6c-29e9-444b-b14a-10eadb782a59\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.834964 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmxsd\" (UniqueName: \"kubernetes.io/projected/2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682-kube-api-access-rmxsd\") pod \"keystone-operator-controller-manager-b4d948c87-q4vb2\" (UID: \"2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.835003 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4tfr\" (UniqueName: \"kubernetes.io/projected/92f1b9c3-1bdd-48ca-9a76-68ace2635cf1-kube-api-access-l4tfr\") pod \"mariadb-operator-controller-manager-6994f66f48-8vh65\" (UID: \"92f1b9c3-1bdd-48ca-9a76-68ace2635cf1\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.835032 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4blrs\" (UniqueName: \"kubernetes.io/projected/40be8fff-51f0-467a-aca5-517e02eea23b-kube-api-access-4blrs\") pod \"nova-operator-controller-manager-567668f5cf-rk7x9\" (UID: \"40be8fff-51f0-467a-aca5-517e02eea23b\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.835116 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fcsj\" (UniqueName: \"kubernetes.io/projected/3b114d0a-837c-4f0c-b02a-db694bdab362-kube-api-access-4fcsj\") pod \"neutron-operator-controller-manager-64ddbf8bb-cdt9l\" (UID: \"3b114d0a-837c-4f0c-b02a-db694bdab362\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.844460 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.852082 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-prt26"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.865093 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmxsd\" (UniqueName: \"kubernetes.io/projected/2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682-kube-api-access-rmxsd\") pod \"keystone-operator-controller-manager-b4d948c87-q4vb2\" (UID: \"2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.903937 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.904974 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.924580 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.014172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97vmz\" (UniqueName: \"kubernetes.io/projected/209f2e6c-29e9-444b-b14a-10eadb782a59-kube-api-access-97vmz\") pod \"manila-operator-controller-manager-54f6768c69-prt26\" (UID: \"209f2e6c-29e9-444b-b14a-10eadb782a59\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.015478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4tfr\" (UniqueName: \"kubernetes.io/projected/92f1b9c3-1bdd-48ca-9a76-68ace2635cf1-kube-api-access-l4tfr\") pod \"mariadb-operator-controller-manager-6994f66f48-8vh65\" (UID: \"92f1b9c3-1bdd-48ca-9a76-68ace2635cf1\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.015686 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4blrs\" (UniqueName: \"kubernetes.io/projected/40be8fff-51f0-467a-aca5-517e02eea23b-kube-api-access-4blrs\") pod \"nova-operator-controller-manager-567668f5cf-rk7x9\" (UID: \"40be8fff-51f0-467a-aca5-517e02eea23b\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.019409 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fcsj\" (UniqueName: \"kubernetes.io/projected/3b114d0a-837c-4f0c-b02a-db694bdab362-kube-api-access-4fcsj\") pod 
\"neutron-operator-controller-manager-64ddbf8bb-cdt9l\" (UID: \"3b114d0a-837c-4f0c-b02a-db694bdab362\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.069157 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fcsj\" (UniqueName: \"kubernetes.io/projected/3b114d0a-837c-4f0c-b02a-db694bdab362-kube-api-access-4fcsj\") pod \"neutron-operator-controller-manager-64ddbf8bb-cdt9l\" (UID: \"3b114d0a-837c-4f0c-b02a-db694bdab362\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.074468 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4blrs\" (UniqueName: \"kubernetes.io/projected/40be8fff-51f0-467a-aca5-517e02eea23b-kube-api-access-4blrs\") pod \"nova-operator-controller-manager-567668f5cf-rk7x9\" (UID: \"40be8fff-51f0-467a-aca5-517e02eea23b\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.082391 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.086303 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97vmz\" (UniqueName: \"kubernetes.io/projected/209f2e6c-29e9-444b-b14a-10eadb782a59-kube-api-access-97vmz\") pod \"manila-operator-controller-manager-54f6768c69-prt26\" (UID: \"209f2e6c-29e9-444b-b14a-10eadb782a59\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.090344 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4tfr\" (UniqueName: \"kubernetes.io/projected/92f1b9c3-1bdd-48ca-9a76-68ace2635cf1-kube-api-access-l4tfr\") pod 
\"mariadb-operator-controller-manager-6994f66f48-8vh65\" (UID: \"92f1b9c3-1bdd-48ca-9a76-68ace2635cf1\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.094433 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.107280 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.142046 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.145209 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.151501 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-lw5tm" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.152742 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.199646 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.205316 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.212205 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6tgrn" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.219909 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.241130 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97z2n\" (UniqueName: \"kubernetes.io/projected/d34f7233-92b8-4803-ab81-0da45a4de925-kube-api-access-97z2n\") pod \"octavia-operator-controller-manager-69f8888797-4f4zc\" (UID: \"d34f7233-92b8-4803-ab81-0da45a4de925\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.241188 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s82qv\" (UniqueName: \"kubernetes.io/projected/e19083b1-791a-4549-b64e-0bb0032abad2-kube-api-access-s82qv\") pod \"placement-operator-controller-manager-8497b45c89-lmvdv\" (UID: \"e19083b1-791a-4549-b64e-0bb0032abad2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.241302 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.241513 4739 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.245985 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert podName:b1d0315e-6ccb-4c6a-a488-98454bb41358 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:31.24595285 +0000 UTC m=+1023.741673792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert") pod "infra-operator-controller-manager-79d975b745-54k4b" (UID: "b1d0315e-6ccb-4c6a-a488-98454bb41358") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.277572 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.292737 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.303294 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.303725 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.304836 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.308425 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5c4xb" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.323367 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.326606 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.330811 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wmmgv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.332268 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.334145 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.342562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5m5\" (UniqueName: \"kubernetes.io/projected/52927612-b074-4573-aa63-41cbb1d704bf-kube-api-access-mv5m5\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.342632 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-97z2n\" (UniqueName: \"kubernetes.io/projected/d34f7233-92b8-4803-ab81-0da45a4de925-kube-api-access-97z2n\") pod \"octavia-operator-controller-manager-69f8888797-4f4zc\" (UID: \"d34f7233-92b8-4803-ab81-0da45a4de925\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.342660 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s82qv\" (UniqueName: \"kubernetes.io/projected/e19083b1-791a-4549-b64e-0bb0032abad2-kube-api-access-s82qv\") pod \"placement-operator-controller-manager-8497b45c89-lmvdv\" (UID: \"e19083b1-791a-4549-b64e-0bb0032abad2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.342735 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.342767 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbplc\" (UniqueName: \"kubernetes.io/projected/8336a5f7-2ff0-440a-88b0-a6ab51692965-kube-api-access-dbplc\") pod \"ovn-operator-controller-manager-d44cf6b75-4lkbs\" (UID: \"8336a5f7-2ff0-440a-88b0-a6ab51692965\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.346519 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.348436 4739 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.350904 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.352498 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-l9xfh" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.365305 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97z2n\" (UniqueName: \"kubernetes.io/projected/d34f7233-92b8-4803-ab81-0da45a4de925-kube-api-access-97z2n\") pod \"octavia-operator-controller-manager-69f8888797-4f4zc\" (UID: \"d34f7233-92b8-4803-ab81-0da45a4de925\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.369103 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s82qv\" (UniqueName: \"kubernetes.io/projected/e19083b1-791a-4549-b64e-0bb0032abad2-kube-api-access-s82qv\") pod \"placement-operator-controller-manager-8497b45c89-lmvdv\" (UID: \"e19083b1-791a-4549-b64e-0bb0032abad2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.405853 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.407237 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.410395 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8w85g" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.444457 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.444538 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6z2d\" (UniqueName: \"kubernetes.io/projected/538f0d59-9eea-4f76-a310-f7f724593a1e-kube-api-access-f6z2d\") pod \"telemetry-operator-controller-manager-6956d67c5c-52bt7\" (UID: \"538f0d59-9eea-4f76-a310-f7f724593a1e\") " pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.444587 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbplc\" (UniqueName: \"kubernetes.io/projected/8336a5f7-2ff0-440a-88b0-a6ab51692965-kube-api-access-dbplc\") pod \"ovn-operator-controller-manager-d44cf6b75-4lkbs\" (UID: \"8336a5f7-2ff0-440a-88b0-a6ab51692965\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.444644 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27nb\" (UniqueName: \"kubernetes.io/projected/ac911184-3930-4f7e-9d77-2cc9e7262ea6-kube-api-access-b27nb\") pod 
\"swift-operator-controller-manager-68f46476f-s7fsm\" (UID: \"ac911184-3930-4f7e-9d77-2cc9e7262ea6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.444728 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.444896 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert podName:52927612-b074-4573-aa63-41cbb1d704bf nodeName:}" failed. No retries permitted until 2026-02-18 14:16:30.944770282 +0000 UTC m=+1023.440491204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" (UID: "52927612-b074-4573-aa63-41cbb1d704bf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.445290 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv5m5\" (UniqueName: \"kubernetes.io/projected/52927612-b074-4573-aa63-41cbb1d704bf-kube-api-access-mv5m5\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.479762 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbplc\" (UniqueName: \"kubernetes.io/projected/8336a5f7-2ff0-440a-88b0-a6ab51692965-kube-api-access-dbplc\") pod \"ovn-operator-controller-manager-d44cf6b75-4lkbs\" (UID: \"8336a5f7-2ff0-440a-88b0-a6ab51692965\") " 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.483593 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.483634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.483656 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-jblfh"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.484217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv5m5\" (UniqueName: \"kubernetes.io/projected/52927612-b074-4573-aa63-41cbb1d704bf-kube-api-access-mv5m5\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.485809 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-jblfh"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.485843 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.486127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.487427 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.487840 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.489608 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kslv7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.489653 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f4lgz" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.497731 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.502104 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.506508 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.506578 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.515788 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4dh9w" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.518779 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559224 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6z2d\" (UniqueName: \"kubernetes.io/projected/538f0d59-9eea-4f76-a310-f7f724593a1e-kube-api-access-f6z2d\") pod \"telemetry-operator-controller-manager-6956d67c5c-52bt7\" (UID: \"538f0d59-9eea-4f76-a310-f7f724593a1e\") " pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559357 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79sj5\" (UniqueName: 
\"kubernetes.io/projected/8add2ed9-6416-4e9f-a3a1-f8a615962850-kube-api-access-79sj5\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559382 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncr8j\" (UniqueName: \"kubernetes.io/projected/6741b4b4-1817-4639-bdf6-b5be2729a1fa-kube-api-access-ncr8j\") pod \"test-operator-controller-manager-7866795846-jblfh\" (UID: \"6741b4b4-1817-4639-bdf6-b5be2729a1fa\") " pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559418 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b27nb\" (UniqueName: \"kubernetes.io/projected/ac911184-3930-4f7e-9d77-2cc9e7262ea6-kube-api-access-b27nb\") pod \"swift-operator-controller-manager-68f46476f-s7fsm\" (UID: \"ac911184-3930-4f7e-9d77-2cc9e7262ea6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559502 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559534 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz24r\" (UniqueName: \"kubernetes.io/projected/caed7b7d-66db-4bd9-ba33-efc5f3951069-kube-api-access-gz24r\") pod 
\"watcher-operator-controller-manager-5db88f68c-kssdd\" (UID: \"caed7b7d-66db-4bd9-ba33-efc5f3951069\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.586736 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b27nb\" (UniqueName: \"kubernetes.io/projected/ac911184-3930-4f7e-9d77-2cc9e7262ea6-kube-api-access-b27nb\") pod \"swift-operator-controller-manager-68f46476f-s7fsm\" (UID: \"ac911184-3930-4f7e-9d77-2cc9e7262ea6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.593274 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6z2d\" (UniqueName: \"kubernetes.io/projected/538f0d59-9eea-4f76-a310-f7f724593a1e-kube-api-access-f6z2d\") pod \"telemetry-operator-controller-manager-6956d67c5c-52bt7\" (UID: \"538f0d59-9eea-4f76-a310-f7f724593a1e\") " pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.599895 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.601637 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.609497 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.619991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-8k5vx" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.621027 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.621665 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.669065 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.670889 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79sj5\" (UniqueName: \"kubernetes.io/projected/8add2ed9-6416-4e9f-a3a1-f8a615962850-kube-api-access-79sj5\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.670945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncr8j\" (UniqueName: \"kubernetes.io/projected/6741b4b4-1817-4639-bdf6-b5be2729a1fa-kube-api-access-ncr8j\") pod \"test-operator-controller-manager-7866795846-jblfh\" (UID: \"6741b4b4-1817-4639-bdf6-b5be2729a1fa\") " pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.671036 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.671072 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz24r\" (UniqueName: \"kubernetes.io/projected/caed7b7d-66db-4bd9-ba33-efc5f3951069-kube-api-access-gz24r\") pod \"watcher-operator-controller-manager-5db88f68c-kssdd\" (UID: \"caed7b7d-66db-4bd9-ba33-efc5f3951069\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.671152 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8j56\" (UniqueName: \"kubernetes.io/projected/06163b75-4f40-42a0-83d8-70c935b9172c-kube-api-access-n8j56\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gszz\" (UID: \"06163b75-4f40-42a0-83d8-70c935b9172c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.671188 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.671405 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.671473 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:31.171457465 +0000 UTC m=+1023.667178387 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "metrics-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.675138 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.675227 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:31.175199378 +0000 UTC m=+1023.670920300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.723345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncr8j\" (UniqueName: \"kubernetes.io/projected/6741b4b4-1817-4639-bdf6-b5be2729a1fa-kube-api-access-ncr8j\") pod \"test-operator-controller-manager-7866795846-jblfh\" (UID: \"6741b4b4-1817-4639-bdf6-b5be2729a1fa\") " pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.724136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz24r\" (UniqueName: \"kubernetes.io/projected/caed7b7d-66db-4bd9-ba33-efc5f3951069-kube-api-access-gz24r\") pod \"watcher-operator-controller-manager-5db88f68c-kssdd\" (UID: \"caed7b7d-66db-4bd9-ba33-efc5f3951069\") " 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.730734 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79sj5\" (UniqueName: \"kubernetes.io/projected/8add2ed9-6416-4e9f-a3a1-f8a615962850-kube-api-access-79sj5\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.772678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8j56\" (UniqueName: \"kubernetes.io/projected/06163b75-4f40-42a0-83d8-70c935b9172c-kube-api-access-n8j56\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gszz\" (UID: \"06163b75-4f40-42a0-83d8-70c935b9172c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.812121 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8j56\" (UniqueName: \"kubernetes.io/projected/06163b75-4f40-42a0-83d8-70c935b9172c-kube-api-access-n8j56\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gszz\" (UID: \"06163b75-4f40-42a0-83d8-70c935b9172c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.917653 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.951115 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.976031 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.976497 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.976590 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert podName:52927612-b074-4573-aa63-41cbb1d704bf nodeName:}" failed. No retries permitted until 2026-02-18 14:16:31.976565683 +0000 UTC m=+1024.472286605 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" (UID: "52927612-b074-4573-aa63-41cbb1d704bf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.000620 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.034621 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.060392 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.116393 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.180752 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.181313 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.182817 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.182928 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:32.182892631 +0000 UTC m=+1024.678613553 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.185203 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.185269 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:32.185246669 +0000 UTC m=+1024.680967591 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "metrics-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.291594 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.291849 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.292742 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert 
podName:b1d0315e-6ccb-4c6a-a488-98454bb41358 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:33.292718145 +0000 UTC m=+1025.788439067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert") pod "infra-operator-controller-manager-79d975b745-54k4b" (UID: "b1d0315e-6ccb-4c6a-a488-98454bb41358") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.329713 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" event={"ID":"61bc4b17-baf6-435c-9280-b97fcede913c","Type":"ContainerStarted","Data":"a1c3c3936aa497548c575e3a1dd2edd60e8994a617cf2d4c16c313b197d47d43"} Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.573823 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"] Feb 18 14:16:31 crc kubenswrapper[4739]: W0218 14:16:31.596343 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd617f67f_2577_418f_a367_42c366c17980.slice/crio-22770fd641dff190dae2addaee280dc660f53860667069f30bd6cd33fd8da78f WatchSource:0}: Error finding container 22770fd641dff190dae2addaee280dc660f53860667069f30bd6cd33fd8da78f: Status 404 returned error can't find the container with id 22770fd641dff190dae2addaee280dc660f53860667069f30bd6cd33fd8da78f Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.603576 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"] Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.655945 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"] Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 
14:16:32.014360 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.014649 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.014717 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert podName:52927612-b074-4573-aa63-41cbb1d704bf nodeName:}" failed. No retries permitted until 2026-02-18 14:16:34.014699462 +0000 UTC m=+1026.510420384 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" (UID: "52927612-b074-4573-aa63-41cbb1d704bf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.218473 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.218661 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.218749 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.218832 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:34.218809095 +0000 UTC m=+1026.714530077 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "metrics-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.218849 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.218908 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:34.218891587 +0000 UTC m=+1026.714612589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.317254 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9"] Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.322385 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40be8fff_51f0_467a_aca5_517e02eea23b.slice/crio-79d09b0a588fe7993649ca20283cd1f834a79b84ba84d81bb04ca7735d3e5fc0 WatchSource:0}: Error finding container 79d09b0a588fe7993649ca20283cd1f834a79b84ba84d81bb04ca7735d3e5fc0: Status 404 returned error can't find the container with id 79d09b0a588fe7993649ca20283cd1f834a79b84ba84d81bb04ca7735d3e5fc0 Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.340249 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" event={"ID":"40be8fff-51f0-467a-aca5-517e02eea23b","Type":"ContainerStarted","Data":"79d09b0a588fe7993649ca20283cd1f834a79b84ba84d81bb04ca7735d3e5fc0"} Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.341723 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" event={"ID":"c8f419fe-23b1-4a93-97fe-05071df32425","Type":"ContainerStarted","Data":"acb0115df78d85a449936d5c1c52b22ebb4e7bcb5fbaaae49254abad2a861fe8"} Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.342655 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" event={"ID":"d617f67f-2577-418f-a367-42c366c17980","Type":"ContainerStarted","Data":"22770fd641dff190dae2addaee280dc660f53860667069f30bd6cd33fd8da78f"} Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.343495 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" event={"ID":"19470a60-c796-4a28-a0e2-65b50fa94ea6","Type":"ContainerStarted","Data":"84c39f8d9461fbebb201c04a60ce41eca031946d8167b261a6a5533899ecd27e"} Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.525426 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l"] Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.536472 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e8e2d9d_fbfe_409e_bf3e_ea47e48e1682.slice/crio-31bdf40e6391b2656e9020a2612d49923fb24b7b79eec2611f1e15169de57bb5 WatchSource:0}: Error finding container 31bdf40e6391b2656e9020a2612d49923fb24b7b79eec2611f1e15169de57bb5: Status 404 returned error can't find the container with id 
31bdf40e6391b2656e9020a2612d49923fb24b7b79eec2611f1e15169de57bb5 Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.537872 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2"] Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.545272 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"] Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.552188 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"] Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.559622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-prt26"] Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.567130 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh"] Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.592898 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b114d0a_837c_4f0c_b02a_db694bdab362.slice/crio-888496df5109b2d716df14dcead5bd4978c3daad7bc8d10848f0503fc3f8e319 WatchSource:0}: Error finding container 888496df5109b2d716df14dcead5bd4978c3daad7bc8d10848f0503fc3f8e319: Status 404 returned error can't find the container with id 888496df5109b2d716df14dcead5bd4978c3daad7bc8d10848f0503fc3f8e319 Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.598797 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod209f2e6c_29e9_444b_b14a_10eadb782a59.slice/crio-8e77f0a6a82a4e6aeefc1aafeba9610b2c1d18bf0813a8e2f1312cdb9c53e827 WatchSource:0}: Error finding container 
8e77f0a6a82a4e6aeefc1aafeba9610b2c1d18bf0813a8e2f1312cdb9c53e827: Status 404 returned error can't find the container with id 8e77f0a6a82a4e6aeefc1aafeba9610b2c1d18bf0813a8e2f1312cdb9c53e827 Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.612620 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60bad312_a989_43d1_87e6_6c6f10d1ae8f.slice/crio-91ea96e7716fac20ee0702651532318608f23addcc6c03ccaba047bb76f076ba WatchSource:0}: Error finding container 91ea96e7716fac20ee0702651532318608f23addcc6c03ccaba047bb76f076ba: Status 404 returned error can't find the container with id 91ea96e7716fac20ee0702651532318608f23addcc6c03ccaba047bb76f076ba Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.617582 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877f7fe3_168f_4b05_a88e_a7a11bf45e36.slice/crio-d42b4603998d0d3a2a664bb8963f0e5f961c09a6822d56605b9dd83bb038e78f WatchSource:0}: Error finding container d42b4603998d0d3a2a664bb8963f0e5f961c09a6822d56605b9dd83bb038e78f: Status 404 returned error can't find the container with id d42b4603998d0d3a2a664bb8963f0e5f961c09a6822d56605b9dd83bb038e78f Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.971807 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8336a5f7_2ff0_440a_88b0_a6ab51692965.slice/crio-e1771e1732730be8cb8cf044407cd36120c251d2d5701ec397aac45239719b11 WatchSource:0}: Error finding container e1771e1732730be8cb8cf044407cd36120c251d2d5701ec397aac45239719b11: Status 404 returned error can't find the container with id e1771e1732730be8cb8cf044407cd36120c251d2d5701ec397aac45239719b11 Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.978155 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs"] 
Feb 18 14:16:33 crc kubenswrapper[4739]: W0218 14:16:33.029558 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode19083b1_791a_4549_b64e_0bb0032abad2.slice/crio-752e3908bde0a5934cc23be0c78b460041bcd58ae0ca49f5991fa40d41f82df6 WatchSource:0}: Error finding container 752e3908bde0a5934cc23be0c78b460041bcd58ae0ca49f5991fa40d41f82df6: Status 404 returned error can't find the container with id 752e3908bde0a5934cc23be0c78b460041bcd58ae0ca49f5991fa40d41f82df6 Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.058964 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc"] Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.075543 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65"] Feb 18 14:16:33 crc kubenswrapper[4739]: W0218 14:16:33.081331 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac911184_3930_4f7e_9d77_2cc9e7262ea6.slice/crio-516032e4def080cf6595023255aa61d5b8db081d33c65fe677d69f3854c58c08 WatchSource:0}: Error finding container 516032e4def080cf6595023255aa61d5b8db081d33c65fe677d69f3854c58c08: Status 404 returned error can't find the container with id 516032e4def080cf6595023255aa61d5b8db081d33c65fe677d69f3854c58c08 Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.092308 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv"] Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.099176 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd"] Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.110624 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm"] Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.311610 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7"] Feb 18 14:16:33 crc kubenswrapper[4739]: W0218 14:16:33.334887 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod538f0d59_9eea_4f76_a310_f7f724593a1e.slice/crio-faa6fe2d6cb7661a7eb6d912ab878c5bec659aaa8c9777f6eb540a35c068a607 WatchSource:0}: Error finding container faa6fe2d6cb7661a7eb6d912ab878c5bec659aaa8c9777f6eb540a35c068a607: Status 404 returned error can't find the container with id faa6fe2d6cb7661a7eb6d912ab878c5bec659aaa8c9777f6eb540a35c068a607 Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.357528 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" event={"ID":"8336a5f7-2ff0-440a-88b0-a6ab51692965","Type":"ContainerStarted","Data":"e1771e1732730be8cb8cf044407cd36120c251d2d5701ec397aac45239719b11"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.360768 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" event={"ID":"3b114d0a-837c-4f0c-b02a-db694bdab362","Type":"ContainerStarted","Data":"888496df5109b2d716df14dcead5bd4978c3daad7bc8d10848f0503fc3f8e319"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.361789 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" event={"ID":"877f7fe3-168f-4b05-a88e-a7a11bf45e36","Type":"ContainerStarted","Data":"d42b4603998d0d3a2a664bb8963f0e5f961c09a6822d56605b9dd83bb038e78f"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.364349 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" event={"ID":"e19083b1-791a-4549-b64e-0bb0032abad2","Type":"ContainerStarted","Data":"752e3908bde0a5934cc23be0c78b460041bcd58ae0ca49f5991fa40d41f82df6"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.372542 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" event={"ID":"92f1b9c3-1bdd-48ca-9a76-68ace2635cf1","Type":"ContainerStarted","Data":"fb40c1410d26319eb159847449fb9ae482c108aa746e969b93fbd85bbc0434ba"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.375054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" event={"ID":"fb608395-17b5-4b92-a0be-b5abc08ac979","Type":"ContainerStarted","Data":"8ca939195772b46bdcc61b173814f4d1ea27b68e239e08817e9265fb0211513f"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.378649 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" event={"ID":"60bad312-a989-43d1-87e6-6c6f10d1ae8f","Type":"ContainerStarted","Data":"91ea96e7716fac20ee0702651532318608f23addcc6c03ccaba047bb76f076ba"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.379772 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" event={"ID":"ac911184-3930-4f7e-9d77-2cc9e7262ea6","Type":"ContainerStarted","Data":"516032e4def080cf6595023255aa61d5b8db081d33c65fe677d69f3854c58c08"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.380755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" event={"ID":"caed7b7d-66db-4bd9-ba33-efc5f3951069","Type":"ContainerStarted","Data":"a65a75e9097a4778ddcd5c4d75982228aba4b618eec253fd1189dbbcd46fe452"} Feb 18 14:16:33 crc 
kubenswrapper[4739]: I0218 14:16:33.381697 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" event={"ID":"538f0d59-9eea-4f76-a310-f7f724593a1e","Type":"ContainerStarted","Data":"faa6fe2d6cb7661a7eb6d912ab878c5bec659aaa8c9777f6eb540a35c068a607"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.382432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" event={"ID":"d34f7233-92b8-4803-ab81-0da45a4de925","Type":"ContainerStarted","Data":"4034c4d24ef9e1a0430cc9101561e1de57649244954346e3ddec6d84a716c7ac"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.385676 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" event={"ID":"2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682","Type":"ContainerStarted","Data":"31bdf40e6391b2656e9020a2612d49923fb24b7b79eec2611f1e15169de57bb5"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.386978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" event={"ID":"209f2e6c-29e9-444b-b14a-10eadb782a59","Type":"ContainerStarted","Data":"8e77f0a6a82a4e6aeefc1aafeba9610b2c1d18bf0813a8e2f1312cdb9c53e827"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.389517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:33 crc kubenswrapper[4739]: E0218 14:16:33.389654 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" 
not found Feb 18 14:16:33 crc kubenswrapper[4739]: E0218 14:16:33.389729 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert podName:b1d0315e-6ccb-4c6a-a488-98454bb41358 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:37.389710798 +0000 UTC m=+1029.885431720 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert") pod "infra-operator-controller-manager-79d975b745-54k4b" (UID: "b1d0315e-6ccb-4c6a-a488-98454bb41358") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.461971 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz"] Feb 18 14:16:33 crc kubenswrapper[4739]: E0218 14:16:33.470814 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ncr8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-jblfh_openstack-operators(6741b4b4-1817-4639-bdf6-b5be2729a1fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.470992 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-jblfh"] Feb 18 14:16:33 crc kubenswrapper[4739]: E0218 14:16:33.472792 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" Feb 18 14:16:34 crc kubenswrapper[4739]: I0218 14:16:34.102909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.103111 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.103612 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert podName:52927612-b074-4573-aa63-41cbb1d704bf nodeName:}" failed. No retries permitted until 2026-02-18 14:16:38.103586305 +0000 UTC m=+1030.599307227 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" (UID: "52927612-b074-4573-aa63-41cbb1d704bf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: I0218 14:16:34.307189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.307421 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: I0218 14:16:34.307473 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.307558 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:38.307535074 +0000 UTC m=+1030.803256196 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "metrics-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.307634 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.307780 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:38.307744959 +0000 UTC m=+1030.803466081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: I0218 14:16:34.398282 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" event={"ID":"06163b75-4f40-42a0-83d8-70c935b9172c","Type":"ContainerStarted","Data":"72a837e466540fb33dc740a4e15d77d26716e20825fb8d345f62d8d560dea7c7"} Feb 18 14:16:34 crc kubenswrapper[4739]: I0218 14:16:34.407253 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" event={"ID":"6741b4b4-1817-4639-bdf6-b5be2729a1fa","Type":"ContainerStarted","Data":"8a3c46d16c5456d759f7d03f158bf8e868b5cf3eeb0e970f3d7a255e6772bf42"} Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.408876 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" Feb 18 14:16:35 crc kubenswrapper[4739]: E0218 14:16:35.422912 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" Feb 18 14:16:37 crc kubenswrapper[4739]: I0218 14:16:37.398884 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:37 crc kubenswrapper[4739]: E0218 14:16:37.399332 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:37 crc kubenswrapper[4739]: E0218 14:16:37.399474 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert podName:b1d0315e-6ccb-4c6a-a488-98454bb41358 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:45.399453856 +0000 UTC m=+1037.895174778 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert") pod "infra-operator-controller-manager-79d975b745-54k4b" (UID: "b1d0315e-6ccb-4c6a-a488-98454bb41358") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: I0218 14:16:38.117810 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.118485 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.118548 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert podName:52927612-b074-4573-aa63-41cbb1d704bf nodeName:}" failed. No retries permitted until 2026-02-18 14:16:46.118529132 +0000 UTC m=+1038.614250054 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" (UID: "52927612-b074-4573-aa63-41cbb1d704bf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: I0218 14:16:38.321954 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:38 crc kubenswrapper[4739]: I0218 14:16:38.322186 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.322437 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.322516 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:46.322496691 +0000 UTC m=+1038.818217613 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.322995 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.323047 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:46.323036555 +0000 UTC m=+1038.818757477 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "metrics-server-cert" not found Feb 18 14:16:44 crc kubenswrapper[4739]: E0218 14:16:44.403182 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 18 14:16:44 crc kubenswrapper[4739]: E0218 14:16:44.403715 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fxlv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-knpz9_openstack-operators(61bc4b17-baf6-435c-9280-b97fcede913c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:44 crc kubenswrapper[4739]: E0218 14:16:44.405563 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podUID="61bc4b17-baf6-435c-9280-b97fcede913c" Feb 18 14:16:44 crc kubenswrapper[4739]: E0218 14:16:44.552576 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podUID="61bc4b17-baf6-435c-9280-b97fcede913c" Feb 18 14:16:45 crc kubenswrapper[4739]: I0218 14:16:45.459910 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:45 crc kubenswrapper[4739]: I0218 14:16:45.472932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:45 crc kubenswrapper[4739]: I0218 14:16:45.582507 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.173915 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.180294 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.377999 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.378110 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:46 crc kubenswrapper[4739]: E0218 14:16:46.378139 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:46 crc kubenswrapper[4739]: E0218 14:16:46.378204 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:17:02.378186214 +0000 UTC m=+1054.873907136 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.382666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.439061 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:48 crc kubenswrapper[4739]: E0218 14:16:48.528346 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 18 14:16:48 crc kubenswrapper[4739]: E0218 14:16:48.528749 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcbfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-b9hds_openstack-operators(d617f67f-2577-418f-a367-42c366c17980): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:48 crc kubenswrapper[4739]: E0218 14:16:48.529984 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" Feb 18 14:16:48 crc kubenswrapper[4739]: E0218 14:16:48.584160 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" Feb 18 14:16:49 crc kubenswrapper[4739]: E0218 14:16:49.643943 4739 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 18 14:16:49 crc kubenswrapper[4739]: E0218 14:16:49.644414 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97vmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-prt26_openstack-operators(209f2e6c-29e9-444b-b14a-10eadb782a59): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:49 crc kubenswrapper[4739]: E0218 14:16:49.645578 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" podUID="209f2e6c-29e9-444b-b14a-10eadb782a59" Feb 18 14:16:50 crc kubenswrapper[4739]: E0218 14:16:50.607718 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" podUID="209f2e6c-29e9-444b-b14a-10eadb782a59" Feb 18 14:16:51 crc kubenswrapper[4739]: E0218 14:16:51.995592 4739 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 18 14:16:51 crc kubenswrapper[4739]: E0218 14:16:51.996166 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxhfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-47445_openstack-operators(c8f419fe-23b1-4a93-97fe-05071df32425): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:51 crc kubenswrapper[4739]: E0218 14:16:51.998319 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" Feb 18 14:16:52 crc kubenswrapper[4739]: E0218 14:16:52.619701 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" Feb 18 14:16:54 crc kubenswrapper[4739]: E0218 14:16:54.705618 4739 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 18 14:16:54 crc kubenswrapper[4739]: E0218 14:16:54.706168 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5nrwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-xhkdh_openstack-operators(877f7fe3-168f-4b05-a88e-a7a11bf45e36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:54 crc kubenswrapper[4739]: E0218 14:16:54.707714 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podUID="877f7fe3-168f-4b05-a88e-a7a11bf45e36" Feb 18 14:16:55 crc kubenswrapper[4739]: E0218 14:16:55.534162 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 18 14:16:55 crc kubenswrapper[4739]: E0218 14:16:55.534347 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fcsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-cdt9l_openstack-operators(3b114d0a-837c-4f0c-b02a-db694bdab362): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:55 crc kubenswrapper[4739]: E0218 14:16:55.535700 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podUID="3b114d0a-837c-4f0c-b02a-db694bdab362" Feb 18 14:16:55 crc kubenswrapper[4739]: E0218 14:16:55.657416 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podUID="3b114d0a-837c-4f0c-b02a-db694bdab362" Feb 18 14:16:55 crc kubenswrapper[4739]: E0218 14:16:55.657497 4739 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podUID="877f7fe3-168f-4b05-a88e-a7a11bf45e36" Feb 18 14:16:56 crc kubenswrapper[4739]: E0218 14:16:56.246488 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 18 14:16:56 crc kubenswrapper[4739]: E0218 14:16:56.246717 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gz24r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-kssdd_openstack-operators(caed7b7d-66db-4bd9-ba33-efc5f3951069): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:56 crc kubenswrapper[4739]: E0218 14:16:56.247917 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" Feb 18 14:16:56 crc kubenswrapper[4739]: E0218 14:16:56.662085 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" Feb 18 14:16:57 crc kubenswrapper[4739]: E0218 14:16:57.638496 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 18 14:16:57 crc kubenswrapper[4739]: E0218 14:16:57.638735 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l4tfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-8vh65_openstack-operators(92f1b9c3-1bdd-48ca-9a76-68ace2635cf1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:57 crc kubenswrapper[4739]: E0218 14:16:57.641221 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podUID="92f1b9c3-1bdd-48ca-9a76-68ace2635cf1" Feb 18 14:16:57 crc kubenswrapper[4739]: E0218 14:16:57.669543 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podUID="92f1b9c3-1bdd-48ca-9a76-68ace2635cf1" Feb 18 14:16:59 crc kubenswrapper[4739]: I0218 14:16:59.372871 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:16:59 crc kubenswrapper[4739]: I0218 14:16:59.373300 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:17:00 crc kubenswrapper[4739]: E0218 14:17:00.574546 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 18 14:17:00 crc kubenswrapper[4739]: E0218 14:17:00.575172 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b27nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-s7fsm_openstack-operators(ac911184-3930-4f7e-9d77-2cc9e7262ea6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:00 crc kubenswrapper[4739]: E0218 14:17:00.576592 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podUID="ac911184-3930-4f7e-9d77-2cc9e7262ea6" Feb 18 14:17:00 crc kubenswrapper[4739]: E0218 14:17:00.697067 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podUID="ac911184-3930-4f7e-9d77-2cc9e7262ea6" Feb 18 14:17:02 crc kubenswrapper[4739]: I0218 14:17:02.415205 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:17:02 crc kubenswrapper[4739]: I0218 14:17:02.425221 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:17:02 crc kubenswrapper[4739]: I0218 14:17:02.579905 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:17:04 crc kubenswrapper[4739]: E0218 14:17:04.245856 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 18 14:17:04 crc kubenswrapper[4739]: E0218 14:17:04.246314 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pc7qx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-hxdbh_openstack-operators(19470a60-c796-4a28-a0e2-65b50fa94ea6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:04 crc kubenswrapper[4739]: E0218 14:17:04.247796 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" Feb 18 14:17:04 crc kubenswrapper[4739]: E0218 14:17:04.726590 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.370772 4739 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.371029 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbplc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-4lkbs_openstack-operators(8336a5f7-2ff0-440a-88b0-a6ab51692965): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.372290 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" podUID="8336a5f7-2ff0-440a-88b0-a6ab51692965" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.735521 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" podUID="8336a5f7-2ff0-440a-88b0-a6ab51692965" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.879156 4739 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.879380 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk9w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-m469j_openstack-operators(60bad312-a989-43d1-87e6-6c6f10d1ae8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.880692 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podUID="60bad312-a989-43d1-87e6-6c6f10d1ae8f" Feb 18 14:17:06 crc kubenswrapper[4739]: E0218 14:17:06.506131 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 18 14:17:06 crc kubenswrapper[4739]: E0218 14:17:06.506562 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97z2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-4f4zc_openstack-operators(d34f7233-92b8-4803-ab81-0da45a4de925): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:06 crc kubenswrapper[4739]: E0218 14:17:06.507789 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" Feb 18 14:17:06 crc kubenswrapper[4739]: E0218 14:17:06.743879 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podUID="60bad312-a989-43d1-87e6-6c6f10d1ae8f" Feb 18 14:17:06 crc kubenswrapper[4739]: E0218 14:17:06.744085 4739 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.607349 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.608896 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4blrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-rk7x9_openstack-operators(40be8fff-51f0-467a-aca5-517e02eea23b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.610264 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.703679 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.703749 4739 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.703911 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f6z2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6956d67c5c-52bt7_openstack-operators(538f0d59-9eea-4f76-a310-f7f724593a1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.705126 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.758862 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.759107 4739 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" Feb 18 14:17:09 crc kubenswrapper[4739]: E0218 14:17:09.320321 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 18 14:17:09 crc kubenswrapper[4739]: E0218 14:17:09.320602 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmxsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-q4vb2_openstack-operators(2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:09 crc kubenswrapper[4739]: E0218 14:17:09.323295 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podUID="2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682" Feb 18 14:17:09 crc kubenswrapper[4739]: E0218 14:17:09.765828 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podUID="2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.099723 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.099972 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ncr8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-jblfh_openstack-operators(6741b4b4-1817-4639-bdf6-b5be2729a1fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.101203 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.530800 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.531586 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8j56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7gszz_openstack-operators(06163b75-4f40-42a0-83d8-70c935b9172c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.532906 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" podUID="06163b75-4f40-42a0-83d8-70c935b9172c" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.807054 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" podUID="06163b75-4f40-42a0-83d8-70c935b9172c" Feb 18 14:17:10 crc 
kubenswrapper[4739]: I0218 14:17:10.980755 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52"] Feb 18 14:17:11 crc kubenswrapper[4739]: W0218 14:17:11.283763 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52927612_b074_4573_aa63_41cbb1d704bf.slice/crio-7620fb3ed0529cab0da3c0f659b8f1b47ed2e65369328e05b78428d53064c63c WatchSource:0}: Error finding container 7620fb3ed0529cab0da3c0f659b8f1b47ed2e65369328e05b78428d53064c63c: Status 404 returned error can't find the container with id 7620fb3ed0529cab0da3c0f659b8f1b47ed2e65369328e05b78428d53064c63c Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.289374 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl"] Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.377684 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"] Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.814480 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" event={"ID":"877f7fe3-168f-4b05-a88e-a7a11bf45e36","Type":"ContainerStarted","Data":"371534f04aace7c53c2469bbbae9b5ced744e16ce26172792563ecd694b4570a"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.815787 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.821137 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" 
event={"ID":"c8f419fe-23b1-4a93-97fe-05071df32425","Type":"ContainerStarted","Data":"4406471c71e4a2933549ab100f973cba46a0995206aef2a7133eeb9f42b27c4c"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.821823 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.823067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" event={"ID":"52927612-b074-4573-aa63-41cbb1d704bf","Type":"ContainerStarted","Data":"7620fb3ed0529cab0da3c0f659b8f1b47ed2e65369328e05b78428d53064c63c"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.839365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" event={"ID":"e19083b1-791a-4549-b64e-0bb0032abad2","Type":"ContainerStarted","Data":"8fb2e79aa6360d6a5d350a553c0eadfbb0bdcf8fab1a2e66d211fa6472457468"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.840325 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.846864 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podStartSLOduration=4.567263562 podStartE2EDuration="42.846835088s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.633266585 +0000 UTC m=+1025.128987507" lastFinishedPulling="2026-02-18 14:17:10.912838101 +0000 UTC m=+1063.408559033" observedRunningTime="2026-02-18 14:17:11.836869641 +0000 UTC m=+1064.332590583" watchObservedRunningTime="2026-02-18 14:17:11.846835088 +0000 UTC m=+1064.342556010" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 
14:17:11.847243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" event={"ID":"61bc4b17-baf6-435c-9280-b97fcede913c","Type":"ContainerStarted","Data":"9c95a74b7c5f91247d0f3d3bf78efff2492a323360a862043ad22badf22170c7"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.848634 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.860723 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" event={"ID":"8add2ed9-6416-4e9f-a3a1-f8a615962850","Type":"ContainerStarted","Data":"fb20f6681336822e5b3fde9390c367455cb2527599c8ef33b3dfd4dacb5d5012"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.860786 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" event={"ID":"8add2ed9-6416-4e9f-a3a1-f8a615962850","Type":"ContainerStarted","Data":"136e2e3d2aec2d866f777649ca5ce971a99d184f9a9708b48a7455bd547f4b77"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.863395 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.873956 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" event={"ID":"caed7b7d-66db-4bd9-ba33-efc5f3951069","Type":"ContainerStarted","Data":"285c93b9afd3d340ad58d8694787c0f5e2930c20312607e55d96900e5c227db1"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.875169 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 
14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.888591 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" event={"ID":"d617f67f-2577-418f-a367-42c366c17980","Type":"ContainerStarted","Data":"70fc40e6f6c7263834206245f7aa6fdbc7f676280152de3526726d7fa2c1d246"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.889584 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.895837 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podStartSLOduration=3.708109892 podStartE2EDuration="42.895811363s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:31.622379721 +0000 UTC m=+1024.118100653" lastFinishedPulling="2026-02-18 14:17:10.810081202 +0000 UTC m=+1063.305802124" observedRunningTime="2026-02-18 14:17:11.857524933 +0000 UTC m=+1064.353245855" watchObservedRunningTime="2026-02-18 14:17:11.895811363 +0000 UTC m=+1064.391532285" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.903762 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" event={"ID":"b1d0315e-6ccb-4c6a-a488-98454bb41358","Type":"ContainerStarted","Data":"019a3b57c7d20066dba4b4a096ca0f3a1ce0be4c39737340f2359542e6a19f7e"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.913163 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" event={"ID":"fb608395-17b5-4b92-a0be-b5abc08ac979","Type":"ContainerStarted","Data":"a085a0d30a2debdcfa4545d3ddb90ae303e71e3d6d75309c439d719f629caed7"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.914912 4739 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.926015 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" event={"ID":"209f2e6c-29e9-444b-b14a-10eadb782a59","Type":"ContainerStarted","Data":"7e811048dfbc56ead3937cec2dfe2257a0fa6bfe212eafd482c1c40c61d7c7ad"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.927359 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.935117 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podStartSLOduration=3.359675139 podStartE2EDuration="42.935090757s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:30.988649093 +0000 UTC m=+1023.484370015" lastFinishedPulling="2026-02-18 14:17:10.564064711 +0000 UTC m=+1063.059785633" observedRunningTime="2026-02-18 14:17:11.878120884 +0000 UTC m=+1064.373841806" watchObservedRunningTime="2026-02-18 14:17:11.935090757 +0000 UTC m=+1064.430811689" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.942182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" event={"ID":"3b114d0a-837c-4f0c-b02a-db694bdab362","Type":"ContainerStarted","Data":"eef5964af327ffa966bc134cce5ebd8a7f9bba7dd29db4d1f64ad4224a5ee859"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.943476 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.961549 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podStartSLOduration=7.415659863 podStartE2EDuration="42.961527173s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.038612219 +0000 UTC m=+1025.534333141" lastFinishedPulling="2026-02-18 14:17:08.584479529 +0000 UTC m=+1061.080200451" observedRunningTime="2026-02-18 14:17:11.911877151 +0000 UTC m=+1064.407598073" watchObservedRunningTime="2026-02-18 14:17:11.961527173 +0000 UTC m=+1064.457248105" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.992533 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podStartSLOduration=5.08485234 podStartE2EDuration="42.992509221s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.039239705 +0000 UTC m=+1025.534960627" lastFinishedPulling="2026-02-18 14:17:10.946896586 +0000 UTC m=+1063.442617508" observedRunningTime="2026-02-18 14:17:11.945999458 +0000 UTC m=+1064.441720390" watchObservedRunningTime="2026-02-18 14:17:11.992509221 +0000 UTC m=+1064.488230163" Feb 18 14:17:12 crc kubenswrapper[4739]: I0218 14:17:12.028489 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podStartSLOduration=3.9555249569999997 podStartE2EDuration="43.028466573s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:31.616035773 +0000 UTC m=+1024.111756695" lastFinishedPulling="2026-02-18 14:17:10.688977389 +0000 UTC m=+1063.184698311" observedRunningTime="2026-02-18 14:17:11.980710359 +0000 UTC m=+1064.476431281" watchObservedRunningTime="2026-02-18 14:17:12.028466573 +0000 UTC m=+1064.524187495" Feb 18 14:17:12 crc kubenswrapper[4739]: I0218 14:17:12.044981 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podStartSLOduration=42.044962592 podStartE2EDuration="42.044962592s" podCreationTimestamp="2026-02-18 14:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:17:12.029124069 +0000 UTC m=+1064.524844991" watchObservedRunningTime="2026-02-18 14:17:12.044962592 +0000 UTC m=+1064.540683514" Feb 18 14:17:12 crc kubenswrapper[4739]: I0218 14:17:12.075155 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podStartSLOduration=4.902543198 podStartE2EDuration="43.07512062s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.597686063 +0000 UTC m=+1025.093406985" lastFinishedPulling="2026-02-18 14:17:10.770263485 +0000 UTC m=+1063.265984407" observedRunningTime="2026-02-18 14:17:12.072416573 +0000 UTC m=+1064.568137505" watchObservedRunningTime="2026-02-18 14:17:12.07512062 +0000 UTC m=+1064.570841542" Feb 18 14:17:12 crc kubenswrapper[4739]: I0218 14:17:12.097166 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" podStartSLOduration=5.135814994 podStartE2EDuration="43.097143657s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.604184964 +0000 UTC m=+1025.099905886" lastFinishedPulling="2026-02-18 14:17:10.565513617 +0000 UTC m=+1063.061234549" observedRunningTime="2026-02-18 14:17:12.092193024 +0000 UTC m=+1064.587913966" watchObservedRunningTime="2026-02-18 14:17:12.097143657 +0000 UTC m=+1064.592864579" Feb 18 14:17:12 crc kubenswrapper[4739]: I0218 14:17:12.137672 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podStartSLOduration=9.193192032 podStartE2EDuration="43.137648931s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.553610029 +0000 UTC m=+1025.049330951" lastFinishedPulling="2026-02-18 14:17:06.498066928 +0000 UTC m=+1058.993787850" observedRunningTime="2026-02-18 14:17:12.131631092 +0000 UTC m=+1064.627352034" watchObservedRunningTime="2026-02-18 14:17:12.137648931 +0000 UTC m=+1064.633369853" Feb 18 14:17:14 crc kubenswrapper[4739]: I0218 14:17:14.971349 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" event={"ID":"92f1b9c3-1bdd-48ca-9a76-68ace2635cf1","Type":"ContainerStarted","Data":"c45220d8814c6c0c18e7cb1262ae8722e0667ae6a5b7a51a97cefb8c990e668f"} Feb 18 14:17:14 crc kubenswrapper[4739]: I0218 14:17:14.972165 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:17:14 crc kubenswrapper[4739]: I0218 14:17:14.973707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" event={"ID":"ac911184-3930-4f7e-9d77-2cc9e7262ea6","Type":"ContainerStarted","Data":"94b685f65defd14ff085edc07cf16a6c9eac5af5a9242e2062a105e29adfcadd"} Feb 18 14:17:14 crc kubenswrapper[4739]: I0218 14:17:14.973951 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:17:15 crc kubenswrapper[4739]: I0218 14:17:15.005092 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podStartSLOduration=5.021651883 podStartE2EDuration="46.005067415s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" 
firstStartedPulling="2026-02-18 14:16:33.038941268 +0000 UTC m=+1025.534662190" lastFinishedPulling="2026-02-18 14:17:14.0223568 +0000 UTC m=+1066.518077722" observedRunningTime="2026-02-18 14:17:14.990797391 +0000 UTC m=+1067.486518313" watchObservedRunningTime="2026-02-18 14:17:15.005067415 +0000 UTC m=+1067.500788337" Feb 18 14:17:15 crc kubenswrapper[4739]: I0218 14:17:15.024093 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podStartSLOduration=5.138166282 podStartE2EDuration="46.024071736s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.085721398 +0000 UTC m=+1025.581442320" lastFinishedPulling="2026-02-18 14:17:13.971626852 +0000 UTC m=+1066.467347774" observedRunningTime="2026-02-18 14:17:15.020695502 +0000 UTC m=+1067.516416434" watchObservedRunningTime="2026-02-18 14:17:15.024071736 +0000 UTC m=+1067.519792678" Feb 18 14:17:16 crc kubenswrapper[4739]: I0218 14:17:16.995971 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" event={"ID":"52927612-b074-4573-aa63-41cbb1d704bf","Type":"ContainerStarted","Data":"d3e8ca41d583375bdc3898cd694974bbd81d5102bd70a0f141e5a482d3d4a18a"} Feb 18 14:17:16 crc kubenswrapper[4739]: I0218 14:17:16.996320 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:17:16 crc kubenswrapper[4739]: I0218 14:17:16.998008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" event={"ID":"b1d0315e-6ccb-4c6a-a488-98454bb41358","Type":"ContainerStarted","Data":"309a37ef33b46c2c50248e60dfb4f49973997b9bd9dabd1e8850b219370a129e"} Feb 18 14:17:16 crc kubenswrapper[4739]: I0218 14:17:16.998166 4739 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:17:17 crc kubenswrapper[4739]: I0218 14:17:17.033962 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" podStartSLOduration=42.899291999 podStartE2EDuration="48.033935459s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:17:11.285661458 +0000 UTC m=+1063.781382380" lastFinishedPulling="2026-02-18 14:17:16.420304918 +0000 UTC m=+1068.916025840" observedRunningTime="2026-02-18 14:17:17.027687314 +0000 UTC m=+1069.523408236" watchObservedRunningTime="2026-02-18 14:17:17.033935459 +0000 UTC m=+1069.529656401" Feb 18 14:17:17 crc kubenswrapper[4739]: I0218 14:17:17.055114 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podStartSLOduration=43.011492973 podStartE2EDuration="48.055088294s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:17:11.383122006 +0000 UTC m=+1063.878842928" lastFinishedPulling="2026-02-18 14:17:16.426717327 +0000 UTC m=+1068.922438249" observedRunningTime="2026-02-18 14:17:17.054792906 +0000 UTC m=+1069.550513838" watchObservedRunningTime="2026-02-18 14:17:17.055088294 +0000 UTC m=+1069.550809246" Feb 18 14:17:19 crc kubenswrapper[4739]: I0218 14:17:19.017991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" event={"ID":"19470a60-c796-4a28-a0e2-65b50fa94ea6","Type":"ContainerStarted","Data":"0e42e02d4125cc15a13435836d7862436df0ae98370b7a452960e4147e247a5c"} Feb 18 14:17:19 crc kubenswrapper[4739]: I0218 14:17:19.629534 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" Feb 18 14:17:19 crc kubenswrapper[4739]: I0218 14:17:19.655060 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 14:17:19 crc kubenswrapper[4739]: I0218 14:17:19.668677 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 14:17:19 crc kubenswrapper[4739]: I0218 14:17:19.831020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.029227 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" event={"ID":"d34f7233-92b8-4803-ab81-0da45a4de925","Type":"ContainerStarted","Data":"056e9102a7f1a0d4fcedd4064bb1d26c99b0d9df59bf742820c56be6d652517b"} Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.029363 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.029636 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.045172 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podStartSLOduration=4.912588097 podStartE2EDuration="51.045152819s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.009379674 +0000 UTC m=+1025.505100596" lastFinishedPulling="2026-02-18 14:17:19.141944406 +0000 UTC m=+1071.637665318" 
observedRunningTime="2026-02-18 14:17:20.042600306 +0000 UTC m=+1072.538321248" watchObservedRunningTime="2026-02-18 14:17:20.045152819 +0000 UTC m=+1072.540873741" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.062832 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podStartSLOduration=3.919557637 podStartE2EDuration="51.062812478s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:31.667207043 +0000 UTC m=+1024.162927965" lastFinishedPulling="2026-02-18 14:17:18.810461884 +0000 UTC m=+1071.306182806" observedRunningTime="2026-02-18 14:17:20.055534607 +0000 UTC m=+1072.551255529" watchObservedRunningTime="2026-02-18 14:17:20.062812478 +0000 UTC m=+1072.558533390" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.100076 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.111180 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.156298 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.307348 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.626303 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.006308 4739 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.037305 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.042032 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" event={"ID":"538f0d59-9eea-4f76-a310-f7f724593a1e","Type":"ContainerStarted","Data":"79708c0971e70628d1b238bab729e895a79618886caaafc889eed9311e875037"} Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.042217 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.044347 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" event={"ID":"60bad312-a989-43d1-87e6-6c6f10d1ae8f","Type":"ContainerStarted","Data":"ae95d143ffd19524bc2f0012ed6fc8f8a0f41849bc152802007e707635b34cd9"} Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.044582 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.045916 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" event={"ID":"8336a5f7-2ff0-440a-88b0-a6ab51692965","Type":"ContainerStarted","Data":"b7d5ac4594945191586b7fa6b5eb9940f8353711f6663591b40371cae7064c56"} Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.090344 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podStartSLOduration=5.083365445 podStartE2EDuration="52.090329255s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.33817492 +0000 UTC m=+1025.833895842" lastFinishedPulling="2026-02-18 14:17:20.34513873 +0000 UTC m=+1072.840859652" observedRunningTime="2026-02-18 14:17:21.088715304 +0000 UTC m=+1073.584436226" watchObservedRunningTime="2026-02-18 14:17:21.090329255 +0000 UTC m=+1073.586050177" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.106780 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podStartSLOduration=4.200252639 podStartE2EDuration="52.106762982s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.616755046 +0000 UTC m=+1025.112475968" lastFinishedPulling="2026-02-18 14:17:20.523265389 +0000 UTC m=+1073.018986311" observedRunningTime="2026-02-18 14:17:21.103271356 +0000 UTC m=+1073.598992288" watchObservedRunningTime="2026-02-18 14:17:21.106762982 +0000 UTC m=+1073.602483904" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.134946 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" podStartSLOduration=4.28652829 podStartE2EDuration="52.134920961s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.002661158 +0000 UTC m=+1025.498382080" lastFinishedPulling="2026-02-18 14:17:20.851053839 +0000 UTC m=+1073.346774751" observedRunningTime="2026-02-18 14:17:21.123748804 +0000 UTC m=+1073.619469726" watchObservedRunningTime="2026-02-18 14:17:21.134920961 +0000 UTC m=+1073.630641903" Feb 18 14:17:22 crc kubenswrapper[4739]: I0218 14:17:22.588415 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.071780 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" event={"ID":"40be8fff-51f0-467a-aca5-517e02eea23b","Type":"ContainerStarted","Data":"683f1a0cb3d323ab64501f029c46a596f10b1e3cdb67aa24d85f590ebb041579"} Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.072547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.074852 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" event={"ID":"06163b75-4f40-42a0-83d8-70c935b9172c","Type":"ContainerStarted","Data":"9f2624b4d098577f1d1f21dcd591e0fbf59f2207a8e12521154aa447bfb715be"} Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.077142 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" event={"ID":"2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682","Type":"ContainerStarted","Data":"06f4cd242b305b5c897a9f466c332305032898fdc501afc63b66c8d18af6c3b3"} Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.077427 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.095413 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podStartSLOduration=4.396738472 podStartE2EDuration="55.095394432s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.324849945 +0000 UTC m=+1024.820570867" lastFinishedPulling="2026-02-18 
14:17:23.023505905 +0000 UTC m=+1075.519226827" observedRunningTime="2026-02-18 14:17:24.090155092 +0000 UTC m=+1076.585876004" watchObservedRunningTime="2026-02-18 14:17:24.095394432 +0000 UTC m=+1076.591115354"
Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.113806 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podStartSLOduration=4.824085233 podStartE2EDuration="55.113782728s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.538142576 +0000 UTC m=+1025.033863498" lastFinishedPulling="2026-02-18 14:17:22.827840061 +0000 UTC m=+1075.323560993" observedRunningTime="2026-02-18 14:17:24.10619841 +0000 UTC m=+1076.601919332" watchObservedRunningTime="2026-02-18 14:17:24.113782728 +0000 UTC m=+1076.609503670"
Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.134570 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" podStartSLOduration=4.579704585 podStartE2EDuration="54.134542663s" podCreationTimestamp="2026-02-18 14:16:30 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.469701322 +0000 UTC m=+1025.965422244" lastFinishedPulling="2026-02-18 14:17:23.02453941 +0000 UTC m=+1075.520260322" observedRunningTime="2026-02-18 14:17:24.12230518 +0000 UTC m=+1076.618026112" watchObservedRunningTime="2026-02-18 14:17:24.134542663 +0000 UTC m=+1076.630263575"
Feb 18 14:17:25 crc kubenswrapper[4739]: E0218 14:17:25.411673 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa"
Feb 18 14:17:25 crc kubenswrapper[4739]: I0218 14:17:25.588341 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"
Feb 18 14:17:26 crc kubenswrapper[4739]: I0218 14:17:26.445291 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl"
Feb 18 14:17:29 crc kubenswrapper[4739]: I0218 14:17:29.373076 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:17:29 crc kubenswrapper[4739]: I0218 14:17:29.373418 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:17:29 crc kubenswrapper[4739]: I0218 14:17:29.769700 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"
Feb 18 14:17:29 crc kubenswrapper[4739]: I0218 14:17:29.849084 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"
Feb 18 14:17:29 crc kubenswrapper[4739]: I0218 14:17:29.908915 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2"
Feb 18 14:17:30 crc kubenswrapper[4739]: I0218 14:17:30.279951 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9"
Feb 18 14:17:30 crc kubenswrapper[4739]: I0218 14:17:30.626105 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc"
Feb 18 14:17:30 crc kubenswrapper[4739]: I0218 14:17:30.670941 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs"
Feb 18 14:17:30 crc kubenswrapper[4739]: I0218 14:17:30.679262 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs"
Feb 18 14:17:30 crc kubenswrapper[4739]: I0218 14:17:30.954423 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7"
Feb 18 14:17:39 crc kubenswrapper[4739]: I0218 14:17:39.192667 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" event={"ID":"6741b4b4-1817-4639-bdf6-b5be2729a1fa","Type":"ContainerStarted","Data":"0e3ddc635df525ddd18d3680b1b38102b9456254f940ba8fc0e4a8a2ed29bc7c"}
Feb 18 14:17:39 crc kubenswrapper[4739]: I0218 14:17:39.193529 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh"
Feb 18 14:17:39 crc kubenswrapper[4739]: I0218 14:17:39.210969 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podStartSLOduration=4.8019964139999995 podStartE2EDuration="1m10.210951728s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.470695287 +0000 UTC m=+1025.966416209" lastFinishedPulling="2026-02-18 14:17:38.879650601 +0000 UTC m=+1091.375371523" observedRunningTime="2026-02-18 14:17:39.207255957 +0000 UTC m=+1091.702976879" watchObservedRunningTime="2026-02-18 14:17:39.210951728 +0000 UTC m=+1091.706672640"
Feb 18 14:17:51 crc kubenswrapper[4739]: I0218 14:17:51.063565 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh"
Feb 18 14:17:59 crc kubenswrapper[4739]: I0218 14:17:59.372947 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:17:59 crc kubenswrapper[4739]: I0218 14:17:59.373367 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:17:59 crc kubenswrapper[4739]: I0218 14:17:59.373408 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4"
Feb 18 14:17:59 crc kubenswrapper[4739]: I0218 14:17:59.374121 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 14:17:59 crc kubenswrapper[4739]: I0218 14:17:59.374169 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0" gracePeriod=600
Feb 18 14:18:00 crc kubenswrapper[4739]: I0218 14:18:00.384607 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0" exitCode=0
Feb 18 14:18:00 crc kubenswrapper[4739]: I0218 14:18:00.384685 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0"}
Feb 18 14:18:00 crc kubenswrapper[4739]: I0218 14:18:00.385981 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a"}
Feb 18 14:18:00 crc kubenswrapper[4739]: I0218 14:18:00.386075 4739 scope.go:117] "RemoveContainer" containerID="808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.424732 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"]
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.427938 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.431970 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.432055 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.432177 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.432177 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jdmzz"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.437052 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"]
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.440934 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrtrh\" (UniqueName: \"kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.440998 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.499957 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"]
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.505174 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.507349 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.522759 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"]
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.542613 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrtrh\" (UniqueName: \"kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.542778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.542862 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.542933 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrjt9\" (UniqueName: \"kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.542986 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.544387 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.561400 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrtrh\" (UniqueName: \"kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.644269 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.644337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrjt9\" (UniqueName: \"kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.644375 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.645191 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.645561 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.663023 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrjt9\" (UniqueName: \"kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.758199 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx"
Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.822122 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n"
Feb 18 14:18:09 crc kubenswrapper[4739]: I0218 14:18:09.305664 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"]
Feb 18 14:18:09 crc kubenswrapper[4739]: I0218 14:18:09.424436 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"]
Feb 18 14:18:09 crc kubenswrapper[4739]: W0218 14:18:09.428907 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa473d6_d18d_484f_ae1e_8691ed20efa1.slice/crio-c664961af5f5933902fb83588ea3526b81c5f95ad0a6dd0e56eacb644586d63d WatchSource:0}: Error finding container c664961af5f5933902fb83588ea3526b81c5f95ad0a6dd0e56eacb644586d63d: Status 404 returned error can't find the container with id c664961af5f5933902fb83588ea3526b81c5f95ad0a6dd0e56eacb644586d63d
Feb 18 14:18:09 crc kubenswrapper[4739]: I0218 14:18:09.501434 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" event={"ID":"1a5000d3-4c10-42f8-9912-1fa1628fd929","Type":"ContainerStarted","Data":"4808e9e85e6feee30fab77e12dbad19f1e8587e014af2fadd4de7f34a6f67e25"}
Feb 18 14:18:09 crc kubenswrapper[4739]: I0218 14:18:09.502846 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" event={"ID":"eaa473d6-d18d-484f-ae1e-8691ed20efa1","Type":"ContainerStarted","Data":"c664961af5f5933902fb83588ea3526b81c5f95ad0a6dd0e56eacb644586d63d"}
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.249542 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"]
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.289692 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"]
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.291147 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.308246 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"]
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.410119 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.410590 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.410669 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gm6x\" (UniqueName: \"kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.512542 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.512693 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.512787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gm6x\" (UniqueName: \"kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.514732 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.518040 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.549972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gm6x\" (UniqueName: \"kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.620786 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"]
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.628514 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.672044 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"]
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.673672 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.706827 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"]
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.824059 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.824142 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.825407 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpzg\" (UniqueName: \"kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.927433 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcpzg\" (UniqueName: \"kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.927808 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.927834 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.928795 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.934750 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.959229 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcpzg\" (UniqueName: \"kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.081873 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.174594 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"]
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.438966 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.447308 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.451387 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.451646 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.451766 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.451879 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.451902 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.452070 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.458217 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.460658 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bkpbw"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.472436 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.474771 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.528414 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.533751 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.577492 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.594011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" event={"ID":"6be5923f-70ed-45b5-a747-d4008eaeb656","Type":"ContainerStarted","Data":"818a67c85ce926301db3afa89b1bb5c3ac9bbdbced8966f71ba1d63af4f883cc"}
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.599348 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.667456 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.667696 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.667843 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.667945 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668067 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668166 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92gx\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668596 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668809 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxbz\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668879 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.669033 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.670979 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.671134 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.671854 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.671961 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672034 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672168 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672258 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqscd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672481 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672883 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673006 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673029 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673081 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673199 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673286 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " 
pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673348 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673516 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673585 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " 
pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673667 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.676291 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775656 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775742 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxbz\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775772 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775834 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775968 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776003 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776080 
4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776102 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776151 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqscd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776427 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776474 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776497 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776545 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " 
pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776593 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776655 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776675 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776739 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776796 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776862 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776895 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.777269 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.777571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.777622 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h92gx\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.777647 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.781529 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.782221 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.782421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.786032 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.789818 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.790198 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.791101 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.796033 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.796600 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.796912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.797397 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.798396 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.800239 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.800673 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" 
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.802119 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.802133 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.802202 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1542ad1e95f6d05e9b33a4f8791d4ee2fe2b5bce9c9209ea9b163f0535bf4310/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.802268 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.808671 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.808915 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.812234 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.812756 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.814324 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.816292 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.820475 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " 
pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.820508 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.820554 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e4a135f402bfdd87a0dd9dc00d6afd10d61dd6559041546aff07ddf4aa84ac2/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.821790 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.824800 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.830183 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxbz\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.830669 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqscd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.845739 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.845896 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.845963 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/42f2352e597643fb9091206ae40b48fcb025360f730dba5ba00ebee7f81842b7/globalmount\"" pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.847676 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.855748 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.864397 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.874940 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.875171 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.875307 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.875542 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.875801 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.875930 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvn4l"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.876090 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.885358 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92gx\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.892016 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.901542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.941307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.957504 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982198 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982280 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf5kv\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982398 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982474 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982605 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982805 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982878 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982902 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.983039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.983114 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085144 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085229 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085271 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf5kv\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085318 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085372 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085396 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085531 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085554 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.086557 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.086663 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.086736 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.087051 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.087761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.087794 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.087896 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.089640 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.089933 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.089967 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b4e22e9c66b4b9e31fc01977dfa2f505609dd5b0e95d61de241c54ade9d7a505/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.091212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.092384 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.093323 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.094763 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.110745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf5kv\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.113585 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.157414 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.213608 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.284207 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.642560 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerStarted","Data":"8bde76f9b97130d02eb6cd439713bddac781417cc738a4a05c1874baac5770d7"}
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.945183 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.947022 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.957272 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2snlj"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.957602 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.957761 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.958264 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.961536 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.970237 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019598 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-operator-scripts\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019676 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njg9\" (UniqueName: \"kubernetes.io/projected/acc9bbc5-8705-410b-977b-ca9245834e36-kube-api-access-2njg9\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-generated\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019843 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019909 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-kolla-config\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-default\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122225 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-kolla-config\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122274 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122346 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-default\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122416 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-operator-scripts\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2njg9\" (UniqueName: \"kubernetes.io/projected/acc9bbc5-8705-410b-977b-ca9245834e36-kube-api-access-2njg9\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122491 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-generated\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.123934 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-generated\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.126680 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-operator-scripts\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.127942 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-default\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.128231 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-kolla-config\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.136917 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.136973 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8876dd33e35d37c7675be2db671fde3d51837d411544d5fae18d0a50fb274985/globalmount\"" pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.140108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.145024 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2njg9\" (UniqueName: \"kubernetes.io/projected/acc9bbc5-8705-410b-977b-ca9245834e36-kube-api-access-2njg9\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.156816 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.306054 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.594574 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.355376 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.360652 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.365061 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zbswg"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.365231 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.365386 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.365740 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.385648 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.463877 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.463989 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.464399 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.464524 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9fvx\" (UniqueName: \"kubernetes.io/projected/869aa11b-eba7-4598-90dc-d50c642b9120-kube-api-access-x9fvx\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.464841 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.464887 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.465122 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.465965 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.567754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.568090 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.568192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.569306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.569354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.569376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.569469 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.569504 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9fvx\" (UniqueName: \"kubernetes.io/projected/869aa11b-eba7-4598-90dc-d50c642b9120-kube-api-access-x9fvx\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.571091 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.571115 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.572154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.572707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.577887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.587602 4739 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openstack/memcached-0"] Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.587739 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.589653 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.594964 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.595309 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.595422 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3f7aab65c980fc379d7c82b79c526e7d4095da6614b07787895a3f513563c855/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.596379 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.597328 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zvx9p" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.603223 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/memcached-0"] Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.644892 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9fvx\" (UniqueName: \"kubernetes.io/projected/869aa11b-eba7-4598-90dc-d50c642b9120-kube-api-access-x9fvx\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.673615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-combined-ca-bundle\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.673800 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-memcached-tls-certs\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.673881 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-kolla-config\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.673902 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tr2j\" (UniqueName: \"kubernetes.io/projected/39286c8b-55e8-41a2-9f36-a7ce475e8313-kube-api-access-8tr2j\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.673923 
4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-config-data\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.704510 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.776217 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-memcached-tls-certs\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.776338 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-kolla-config\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.776364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tr2j\" (UniqueName: \"kubernetes.io/projected/39286c8b-55e8-41a2-9f36-a7ce475e8313-kube-api-access-8tr2j\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.776386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-config-data\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.776495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-combined-ca-bundle\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.778719 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-config-data\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.780284 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-kolla-config\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.794281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-memcached-tls-certs\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.804698 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tr2j\" (UniqueName: \"kubernetes.io/projected/39286c8b-55e8-41a2-9f36-a7ce475e8313-kube-api-access-8tr2j\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.819871 
4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-combined-ca-bundle\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.008181 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.057199 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.474800 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.475298 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.497199 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:18:16 crc kubenswrapper[4739]: W0218 14:18:16.505667 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacc9bbc5_8705_410b_977b_ca9245834e36.slice/crio-15cc3d6411750a8db4747d49c4c5a5a2ab343064e092cd9dfdde295934512fc0 WatchSource:0}: Error finding container 15cc3d6411750a8db4747d49c4c5a5a2ab343064e092cd9dfdde295934512fc0: Status 404 returned error can't find the container with id 15cc3d6411750a8db4747d49c4c5a5a2ab343064e092cd9dfdde295934512fc0 Feb 18 14:18:16 crc kubenswrapper[4739]: W0218 14:18:16.531848 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod846b1cf2_bffb_4eca_a8f2_f3c0fcc7ac4b.slice/crio-a323ec96e46e55ecd38a675963f8fb957be29188446c4c0701ca364f77566a1b WatchSource:0}: Error finding container 
a323ec96e46e55ecd38a675963f8fb957be29188446c4c0701ca364f77566a1b: Status 404 returned error can't find the container with id a323ec96e46e55ecd38a675963f8fb957be29188446c4c0701ca364f77566a1b Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.540781 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:18:16 crc kubenswrapper[4739]: W0218 14:18:16.547268 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70500a97_2717_4761_884a_25cf8ab89380.slice/crio-6a1064f065e3c36cfd11b4abc66439e09b22ce13fc43d0cfe21f9e1ccc93bcec WatchSource:0}: Error finding container 6a1064f065e3c36cfd11b4abc66439e09b22ce13fc43d0cfe21f9e1ccc93bcec: Status 404 returned error can't find the container with id 6a1064f065e3c36cfd11b4abc66439e09b22ce13fc43d0cfe21f9e1ccc93bcec Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.570507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:18:16 crc kubenswrapper[4739]: W0218 14:18:16.597931 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5594aaa_fab3_4dad_b79e_17200bc2f1ee.slice/crio-95dc6b6636dbaa09768645df6028b202c5114fe72bc89c98b8330cd58fee1cc8 WatchSource:0}: Error finding container 95dc6b6636dbaa09768645df6028b202c5114fe72bc89c98b8330cd58fee1cc8: Status 404 returned error can't find the container with id 95dc6b6636dbaa09768645df6028b202c5114fe72bc89c98b8330cd58fee1cc8 Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.713688 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerStarted","Data":"6a1064f065e3c36cfd11b4abc66439e09b22ce13fc43d0cfe21f9e1ccc93bcec"} Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.726191 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-2" event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerStarted","Data":"a323ec96e46e55ecd38a675963f8fb957be29188446c4c0701ca364f77566a1b"} Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.728701 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerStarted","Data":"15cc3d6411750a8db4747d49c4c5a5a2ab343064e092cd9dfdde295934512fc0"} Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.730043 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerStarted","Data":"95dc6b6636dbaa09768645df6028b202c5114fe72bc89c98b8330cd58fee1cc8"} Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.731524 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerStarted","Data":"d4d2f4d954b6b105d9d4d012df3327d247d4b0d91bb0c3076d3bbe9f637b4cc0"} Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.850883 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 14:18:16 crc kubenswrapper[4739]: W0218 14:18:16.859807 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod869aa11b_eba7_4598_90dc_d50c642b9120.slice/crio-755c86cf8719b7d95450ac686ea1aaa7455b0563e40ff67ef44a26a4978d5cdf WatchSource:0}: Error finding container 755c86cf8719b7d95450ac686ea1aaa7455b0563e40ff67ef44a26a4978d5cdf: Status 404 returned error can't find the container with id 755c86cf8719b7d95450ac686ea1aaa7455b0563e40ff67ef44a26a4978d5cdf Feb 18 14:18:17 crc kubenswrapper[4739]: I0218 14:18:17.077850 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 14:18:17 crc kubenswrapper[4739]: I0218 
14:18:17.771007 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerStarted","Data":"755c86cf8719b7d95450ac686ea1aaa7455b0563e40ff67ef44a26a4978d5cdf"} Feb 18 14:18:17 crc kubenswrapper[4739]: I0218 14:18:17.774517 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"39286c8b-55e8-41a2-9f36-a7ce475e8313","Type":"ContainerStarted","Data":"0eb41db429ddb736d60791618a1381bad01ee13af0c05c50d21ae73ca7a4d49c"} Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.316695 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.318244 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.320835 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-8hmh8" Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.348633 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.461681 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzf6\" (UniqueName: \"kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6\") pod \"kube-state-metrics-0\" (UID: \"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d\") " pod="openstack/kube-state-metrics-0" Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.572763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzf6\" (UniqueName: \"kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6\") pod \"kube-state-metrics-0\" (UID: \"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d\") " 
pod="openstack/kube-state-metrics-0" Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.655250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzf6\" (UniqueName: \"kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6\") pod \"kube-state-metrics-0\" (UID: \"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d\") " pod="openstack/kube-state-metrics-0" Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.950082 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.367724 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7"] Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.370011 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.381798 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-l487j" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.381984 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.408484 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7"] Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.530687 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " 
pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.530823 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt8s4\" (UniqueName: \"kubernetes.io/projected/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-kube-api-access-gt8s4\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.639353 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt8s4\" (UniqueName: \"kubernetes.io/projected/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-kube-api-access-gt8s4\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.639657 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: E0218 14:18:19.639931 4739 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 18 14:18:19 crc kubenswrapper[4739]: E0218 14:18:19.640007 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert podName:7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b nodeName:}" failed. No retries permitted until 2026-02-18 14:18:20.139982084 +0000 UTC m=+1132.635703006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert") pod "observability-ui-dashboards-66cbf594b5-m5hn7" (UID: "7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b") : secret "observability-ui-dashboards" not found Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.681505 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt8s4\" (UniqueName: \"kubernetes.io/projected/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-kube-api-access-gt8s4\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.885091 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b9f98d489-4zk5t"] Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.887479 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.919000 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.927566 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.955874 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b9f98d489-4zk5t"] Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.990238 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.990518 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.990729 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.990884 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.991392 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.991746 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.998073 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nz745" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.011989 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.048798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.048872 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049009 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-oauth-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049123 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049241 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049264 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049365 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049397 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 
14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-service-ca\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049576 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-trusted-ca-bundle\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049707 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnhmt\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt\") pod \"prometheus-metric-storage-0\" (UID: 
\"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049730 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-oauth-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049750 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgrzp\" (UniqueName: \"kubernetes.io/projected/39496c01-fddc-4d5c-8c1a-32af402a87cd-kube-api-access-wgrzp\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.051526 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.151963 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnhmt\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152016 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-oauth-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152040 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wgrzp\" (UniqueName: \"kubernetes.io/projected/39496c01-fddc-4d5c-8c1a-32af402a87cd-kube-api-access-wgrzp\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152102 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152140 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-oauth-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152199 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152222 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152338 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152433 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152493 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152560 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-service-ca\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152590 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.153475 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-oauth-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.153818 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-trusted-ca-bundle\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.154020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.154139 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.154398 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.154871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 
crc kubenswrapper[4739]: I0218 14:18:20.155632 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-service-ca\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.162288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.175288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.175836 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.176471 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 
14:18:20.186844 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.208399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgrzp\" (UniqueName: \"kubernetes.io/projected/39496c01-fddc-4d5c-8c1a-32af402a87cd-kube-api-access-wgrzp\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.215517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.221524 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnhmt\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.222360 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-oauth-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.243497 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.255411 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.290892 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-trusted-ca-bundle\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.301537 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.301763 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/01cfb519e92c9e23501f00a5b6c703ca97cb1b944d5fe5c6aa349ce505ad2fe2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.338922 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.400867 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.532780 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.640010 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.963589 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zz64p"] Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.965854 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.970973 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nvrtf" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.971619 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.971858 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.979883 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p"] Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.051615 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5cglq"] Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.053952 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.065841 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5cglq"] Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115240 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-run\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-combined-ca-bundle\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115420 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-log-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115485 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7s2q\" (UniqueName: 
\"kubernetes.io/projected/7289493d-f197-436b-bc45-84721d12c034-kube-api-access-h7s2q\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115508 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-ovn-controller-tls-certs\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115541 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115563 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-lib\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115588 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-scripts\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115617 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-log\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115654 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7289493d-f197-436b-bc45-84721d12c034-scripts\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115691 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-etc-ovs\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115720 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxk2w\" (UniqueName: \"kubernetes.io/projected/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-kube-api-access-wxk2w\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218275 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-etc-ovs\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218344 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxk2w\" (UniqueName: 
\"kubernetes.io/projected/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-kube-api-access-wxk2w\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-run\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-combined-ca-bundle\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218536 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218586 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-log-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218613 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7s2q\" (UniqueName: \"kubernetes.io/projected/7289493d-f197-436b-bc45-84721d12c034-kube-api-access-h7s2q\") pod \"ovn-controller-zz64p\" (UID: 
\"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218631 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-ovn-controller-tls-certs\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218655 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-lib\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218701 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-scripts\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218729 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-log\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218746 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7289493d-f197-436b-bc45-84721d12c034-scripts\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.219798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-log-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.220023 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-run\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.220172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-etc-ovs\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.220637 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.220848 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-lib\") pod \"ovn-controller-ovs-5cglq\" 
(UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.220911 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.221027 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7289493d-f197-436b-bc45-84721d12c034-scripts\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.221271 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-log\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.223972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-scripts\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.227214 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-combined-ca-bundle\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.241030 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-ovn-controller-tls-certs\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.249419 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxk2w\" (UniqueName: \"kubernetes.io/projected/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-kube-api-access-wxk2w\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.273493 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7s2q\" (UniqueName: \"kubernetes.io/projected/7289493d-f197-436b-bc45-84721d12c034-kube-api-access-h7s2q\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.318155 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.394514 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.493319 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.494864 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.500492 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-24fl6" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.500776 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.500937 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.502491 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.507837 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.511057 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641083 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-config\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641508 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641577 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641651 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22289461-6c53-461c-adfe-0f1cd7209928-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641697 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641753 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641830 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-q74w7\" (UniqueName: \"kubernetes.io/projected/22289461-6c53-461c-adfe-0f1cd7209928-kube-api-access-q74w7\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744768 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744807 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22289461-6c53-461c-adfe-0f1cd7209928-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744844 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744890 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-scripts\") pod 
\"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q74w7\" (UniqueName: \"kubernetes.io/projected/22289461-6c53-461c-adfe-0f1cd7209928-kube-api-access-q74w7\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744999 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-config\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.745058 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.746932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.748062 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-config\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.748119 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.749700 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22289461-6c53-461c-adfe-0f1cd7209928-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.752696 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.755639 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.755681 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c43fcf5985af0c5e34aea6c044b6fe94957dce2fb6216756fe3ecd427fa83e65/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.773171 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.773642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q74w7\" (UniqueName: \"kubernetes.io/projected/22289461-6c53-461c-adfe-0f1cd7209928-kube-api-access-q74w7\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.826001 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.835156 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.018127 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.020264 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.025113 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.025364 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.027991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2djtj" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.028276 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.046638 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138314 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-config\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 
14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138432 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138497 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138531 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgftg\" (UniqueName: \"kubernetes.io/projected/74c434ad-eea8-4896-b65d-26eb1ca89f84-kube-api-access-sgftg\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138616 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138691 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" 
Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138741 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241004 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241122 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241174 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241233 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-config\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241281 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241310 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241387 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgftg\" (UniqueName: \"kubernetes.io/projected/74c434ad-eea8-4896-b65d-26eb1ca89f84-kube-api-access-sgftg\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.242917 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.242940 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.243217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-config\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.248132 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.248203 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.248632 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.248661 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ad264258ca460ea0cafe0fa90875c9c3a404027f6d2571fa7d126eda6292dab/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.267501 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgftg\" (UniqueName: \"kubernetes.io/projected/74c434ad-eea8-4896-b65d-26eb1ca89f84-kube-api-access-sgftg\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.282185 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.285896 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.350903 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:39 crc kubenswrapper[4739]: E0218 14:18:39.398401 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:18:39 crc kubenswrapper[4739]: E0218 14:18:39.399273 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf5kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(f34a572d-30ca-4de5-bf27-3371e1e9d197): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:39 crc 
kubenswrapper[4739]: E0218 14:18:39.400683 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" Feb 18 14:18:39 crc kubenswrapper[4739]: E0218 14:18:39.409083 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:18:39 crc kubenswrapper[4739]: E0218 14:18:39.409336 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqscd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(70500a97-2717-4761-884a-25cf8ab89380): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:39 crc 
kubenswrapper[4739]: E0218 14:18:39.411295 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" Feb 18 14:18:39 crc kubenswrapper[4739]: I0218 14:18:39.848289 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:18:40 crc kubenswrapper[4739]: E0218 14:18:40.108880 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" Feb 18 14:18:40 crc kubenswrapper[4739]: E0218 14:18:40.109643 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.924162 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.924344 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2njg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(acc9bbc5-8705-410b-977b-ca9245834e36): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.925685 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.961295 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.961541 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h92gx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(a5594aaa-fab3-4dad-b79e-17200bc2f1ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:41 crc 
kubenswrapper[4739]: E0218 14:18:41.966651 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.122824 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.122832 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.226867 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.227077 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> 
/var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vbxbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termin
ationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.228231 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" Feb 18 14:18:42 crc kubenswrapper[4739]: I0218 14:18:42.241869 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p"] Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.391587 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.391779 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9fvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(869aa11b-eba7-4598-90dc-d50c642b9120): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.393843 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" Feb 18 14:18:43 crc kubenswrapper[4739]: E0218 14:18:43.134640 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" Feb 18 14:18:43 crc kubenswrapper[4739]: E0218 14:18:43.134646 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" Feb 18 14:18:43 crc kubenswrapper[4739]: E0218 14:18:43.195636 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 18 14:18:43 crc kubenswrapper[4739]: E0218 14:18:43.195942 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5b4h59ch77hd4h658h675h59bh589h5dbh65chc8hf8h574h5b9h7bh88h78h689hc8h59fh686h5c5h68fh697h544h596h5c4h5d8h678h684hdh7fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(39286c8b-55e8-41a2-9f36-a7ce475e8313): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:43 crc kubenswrapper[4739]: E0218 14:18:43.197746 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="39286c8b-55e8-41a2-9f36-a7ce475e8313" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.134823 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.135531 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrtrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-675f4bcbfc-xpfnx_openstack(1a5000d3-4c10-42f8-9912-1fa1628fd929): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.136714 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" podUID="1a5000d3-4c10-42f8-9912-1fa1628fd929" Feb 18 14:18:44 crc kubenswrapper[4739]: I0218 14:18:44.148177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p" event={"ID":"7289493d-f197-436b-bc45-84721d12c034","Type":"ContainerStarted","Data":"f7b528ec5bf80240e768104dd19a13182dfb81fde383ba626533cbd10bfda010"} Feb 18 14:18:44 crc kubenswrapper[4739]: I0218 14:18:44.151202 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerStarted","Data":"f97314f9f73b65ab6d585d1190d55be82b1924ce7010a229a6c53d15da07f316"} Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.152641 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="39286c8b-55e8-41a2-9f36-a7ce475e8313" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.251289 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.251811 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcpzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-q9846_openstack(d3e2e1a1-a8f7-47c1-9964-399a7d9837fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.253998 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.342862 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.343020 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrjt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7xg2n_openstack(eaa473d6-d18d-484f-ae1e-8691ed20efa1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.344402 4739 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" podUID="eaa473d6-d18d-484f-ae1e-8691ed20efa1" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.535333 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.535682 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gm6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-c68ds_openstack(6be5923f-70ed-45b5-a747-d4008eaeb656): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.537157 4739 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" Feb 18 14:18:44 crc kubenswrapper[4739]: I0218 14:18:44.919846 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:18:44 crc kubenswrapper[4739]: I0218 14:18:44.944298 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b9f98d489-4zk5t"] Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.046057 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7"] Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.165569 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b9f98d489-4zk5t" event={"ID":"39496c01-fddc-4d5c-8c1a-32af402a87cd","Type":"ContainerStarted","Data":"0f08196eba7ddd3d1a29a2e9ff2f40c7cf5486a0d373a5269798df70b00991dd"} Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.165626 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b9f98d489-4zk5t" event={"ID":"39496c01-fddc-4d5c-8c1a-32af402a87cd","Type":"ContainerStarted","Data":"d48d43d75d9d6ea021fafb66b5ab83ecad75207522f3ea950644f9290946fe01"} Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.167016 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" event={"ID":"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b","Type":"ContainerStarted","Data":"8b24603fbb6613a07858086dba4e21c9e54933a795d6b41a1e0a25ca445d072c"} Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.168755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d","Type":"ContainerStarted","Data":"2bc5886939c37fb1062674e7d0eff4b81f7f7a7b2294e0f4745de8bbbca3ba11"} Feb 18 14:18:45 crc kubenswrapper[4739]: E0218 14:18:45.171417 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" Feb 18 14:18:45 crc kubenswrapper[4739]: E0218 14:18:45.171520 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.688132 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.742004 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrtrh\" (UniqueName: \"kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh\") pod \"1a5000d3-4c10-42f8-9912-1fa1628fd929\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.742242 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config\") pod \"1a5000d3-4c10-42f8-9912-1fa1628fd929\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.744377 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config" (OuterVolumeSpecName: "config") pod "1a5000d3-4c10-42f8-9912-1fa1628fd929" (UID: "1a5000d3-4c10-42f8-9912-1fa1628fd929"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.751356 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh" (OuterVolumeSpecName: "kube-api-access-rrtrh") pod "1a5000d3-4c10-42f8-9912-1fa1628fd929" (UID: "1a5000d3-4c10-42f8-9912-1fa1628fd929"). InnerVolumeSpecName "kube-api-access-rrtrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.824679 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.857996 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.858030 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrtrh\" (UniqueName: \"kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh\") on node \"crc\" DevicePath \"\"" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.867552 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5cglq"] Feb 18 14:18:45 crc kubenswrapper[4739]: W0218 14:18:45.879061 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d6d7ab5_2170_48ba_b9bf_40da1ab8fdf7.slice/crio-af5449ec9b1fdc0308db2c932a8c84b4af1d08552a68d8a4890dcedddfdab8c4 WatchSource:0}: Error finding container af5449ec9b1fdc0308db2c932a8c84b4af1d08552a68d8a4890dcedddfdab8c4: Status 404 returned error can't find the container with id af5449ec9b1fdc0308db2c932a8c84b4af1d08552a68d8a4890dcedddfdab8c4 Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.959408 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config\") pod \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.959524 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc\") pod \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " Feb 18 14:18:45 
crc kubenswrapper[4739]: I0218 14:18:45.959577 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrjt9\" (UniqueName: \"kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9\") pod \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.960029 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eaa473d6-d18d-484f-ae1e-8691ed20efa1" (UID: "eaa473d6-d18d-484f-ae1e-8691ed20efa1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.960127 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config" (OuterVolumeSpecName: "config") pod "eaa473d6-d18d-484f-ae1e-8691ed20efa1" (UID: "eaa473d6-d18d-484f-ae1e-8691ed20efa1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.961278 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.961302 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.962996 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9" (OuterVolumeSpecName: "kube-api-access-vrjt9") pod "eaa473d6-d18d-484f-ae1e-8691ed20efa1" (UID: "eaa473d6-d18d-484f-ae1e-8691ed20efa1"). InnerVolumeSpecName "kube-api-access-vrjt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.065687 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrjt9\" (UniqueName: \"kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9\") on node \"crc\" DevicePath \"\"" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.182634 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" event={"ID":"1a5000d3-4c10-42f8-9912-1fa1628fd929","Type":"ContainerDied","Data":"4808e9e85e6feee30fab77e12dbad19f1e8587e014af2fadd4de7f34a6f67e25"} Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.182744 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.184769 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cglq" event={"ID":"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7","Type":"ContainerStarted","Data":"af5449ec9b1fdc0308db2c932a8c84b4af1d08552a68d8a4890dcedddfdab8c4"} Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.187342 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" event={"ID":"eaa473d6-d18d-484f-ae1e-8691ed20efa1","Type":"ContainerDied","Data":"c664961af5f5933902fb83588ea3526b81c5f95ad0a6dd0e56eacb644586d63d"} Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.187359 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.228242 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b9f98d489-4zk5t" podStartSLOduration=27.22822313 podStartE2EDuration="27.22822313s" podCreationTimestamp="2026-02-18 14:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:18:46.217687637 +0000 UTC m=+1158.713408579" watchObservedRunningTime="2026-02-18 14:18:46.22822313 +0000 UTC m=+1158.723944062" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.274671 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"] Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.295215 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"] Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.317472 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"] Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 
14:18:46.328882 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"] Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.425874 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a5000d3-4c10-42f8-9912-1fa1628fd929" path="/var/lib/kubelet/pods/1a5000d3-4c10-42f8-9912-1fa1628fd929/volumes" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.426278 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa473d6-d18d-484f-ae1e-8691ed20efa1" path="/var/lib/kubelet/pods/eaa473d6-d18d-484f-ae1e-8691ed20efa1/volumes" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.750284 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.877304 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 14:18:49 crc kubenswrapper[4739]: W0218 14:18:49.775735 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74c434ad_eea8_4896_b65d_26eb1ca89f84.slice/crio-cbebddd47cdfa4c650dd1c25506e0ed34487bd0cc3995922e180c92ecbb8eafd WatchSource:0}: Error finding container cbebddd47cdfa4c650dd1c25506e0ed34487bd0cc3995922e180c92ecbb8eafd: Status 404 returned error can't find the container with id cbebddd47cdfa4c650dd1c25506e0ed34487bd0cc3995922e180c92ecbb8eafd Feb 18 14:18:49 crc kubenswrapper[4739]: W0218 14:18:49.776779 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22289461_6c53_461c_adfe_0f1cd7209928.slice/crio-b3b526f3e5352197251c26440e9271e44caedacc21ba4f5d11a4e5a4faf29ec2 WatchSource:0}: Error finding container b3b526f3e5352197251c26440e9271e44caedacc21ba4f5d11a4e5a4faf29ec2: Status 404 returned error can't find the container with id 
b3b526f3e5352197251c26440e9271e44caedacc21ba4f5d11a4e5a4faf29ec2 Feb 18 14:18:50 crc kubenswrapper[4739]: I0218 14:18:50.220497 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"74c434ad-eea8-4896-b65d-26eb1ca89f84","Type":"ContainerStarted","Data":"cbebddd47cdfa4c650dd1c25506e0ed34487bd0cc3995922e180c92ecbb8eafd"} Feb 18 14:18:50 crc kubenswrapper[4739]: I0218 14:18:50.221309 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"22289461-6c53-461c-adfe-0f1cd7209928","Type":"ContainerStarted","Data":"b3b526f3e5352197251c26440e9271e44caedacc21ba4f5d11a4e5a4faf29ec2"} Feb 18 14:18:50 crc kubenswrapper[4739]: I0218 14:18:50.533661 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:50 crc kubenswrapper[4739]: I0218 14:18:50.533977 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:50 crc kubenswrapper[4739]: I0218 14:18:50.539232 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:51 crc kubenswrapper[4739]: I0218 14:18:51.236562 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:51 crc kubenswrapper[4739]: I0218 14:18:51.311267 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:18:53 crc kubenswrapper[4739]: I0218 14:18:53.251855 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerStarted","Data":"d130ba5106c46e0eaf379f38920ded0167eab599120dd5d9ffdf9b8b0e9aa0ac"} Feb 18 14:18:57 crc kubenswrapper[4739]: I0218 14:18:57.299324 4739 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" event={"ID":"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b","Type":"ContainerStarted","Data":"82f79c47c38249a0f8113aec3b2167eaf251f56ebb97ba41b8f99a34053dde50"} Feb 18 14:18:57 crc kubenswrapper[4739]: I0218 14:18:57.301324 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cglq" event={"ID":"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7","Type":"ContainerStarted","Data":"410b76e30c037f44b7d028b1f407004690683157575a3e07ed0b9d34ed9c5ec1"} Feb 18 14:18:57 crc kubenswrapper[4739]: I0218 14:18:57.303321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerStarted","Data":"874c74820b18d639be27757d978d0db13d377177e4472870e9ded39d3bfa20c9"} Feb 18 14:18:57 crc kubenswrapper[4739]: I0218 14:18:57.304811 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"22289461-6c53-461c-adfe-0f1cd7209928","Type":"ContainerStarted","Data":"c6681fe2af2fd55098cbbf5b2d0e052ee1979e3c98d9703e64c6493aa37790da"} Feb 18 14:18:57 crc kubenswrapper[4739]: I0218 14:18:57.320951 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" podStartSLOduration=28.452850575 podStartE2EDuration="38.320928626s" podCreationTimestamp="2026-02-18 14:18:19 +0000 UTC" firstStartedPulling="2026-02-18 14:18:45.05815156 +0000 UTC m=+1157.553872482" lastFinishedPulling="2026-02-18 14:18:54.926229601 +0000 UTC m=+1167.421950533" observedRunningTime="2026-02-18 14:18:57.314935254 +0000 UTC m=+1169.810656196" watchObservedRunningTime="2026-02-18 14:18:57.320928626 +0000 UTC m=+1169.816649558" Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.316800 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerStarted","Data":"aca2d7cf6c996ecda1b70039221c80c30560394fd55fdc793dfd46773ab29a77"} Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.319621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerStarted","Data":"a716eae534567c7eacf310c551635181608ae4e159e2fd3e991903215040cab2"} Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.321530 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p" event={"ID":"7289493d-f197-436b-bc45-84721d12c034","Type":"ContainerStarted","Data":"fffe676cfab2c2f4a606a064d4ca13a07363cc63779d67c105c5b541004a6e8a"} Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.321690 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zz64p" Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.323499 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"74c434ad-eea8-4896-b65d-26eb1ca89f84","Type":"ContainerStarted","Data":"ead5562d421aaba5060c11cf9e9f5c887782f5703e83601e3c750ce7f7961098"} Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.327304 4739 generic.go:334] "Generic (PLEG): container finished" podID="3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7" containerID="410b76e30c037f44b7d028b1f407004690683157575a3e07ed0b9d34ed9c5ec1" exitCode=0 Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.329158 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cglq" event={"ID":"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7","Type":"ContainerDied","Data":"410b76e30c037f44b7d028b1f407004690683157575a3e07ed0b9d34ed9c5ec1"} Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.368463 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zz64p" 
podStartSLOduration=27.714333302 podStartE2EDuration="38.368421266s" podCreationTimestamp="2026-02-18 14:18:20 +0000 UTC" firstStartedPulling="2026-02-18 14:18:44.108411847 +0000 UTC m=+1156.604132769" lastFinishedPulling="2026-02-18 14:18:54.762499811 +0000 UTC m=+1167.258220733" observedRunningTime="2026-02-18 14:18:58.364819084 +0000 UTC m=+1170.860540006" watchObservedRunningTime="2026-02-18 14:18:58.368421266 +0000 UTC m=+1170.864142188" Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.342318 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerStarted","Data":"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886"} Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.346683 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cglq" event={"ID":"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7","Type":"ContainerStarted","Data":"8827372c966e9288064ea8d3b3f6ec236d747758df5699891a9db62b6e833265"} Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.350514 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerStarted","Data":"a3ef49497c95dfe6772ec7c1fb042eaa0e995bd29a78ec8447b2892bb58cef30"} Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.354001 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d","Type":"ContainerStarted","Data":"854525aaeba0262ed326c20d6a5adb12a6f5a5f831c0eda717220f2304b4bf4f"} Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.354983 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.358381 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerID="cfcd2e4e872e3af5710dab363dbe65580e2c5dc1a19ac0d3ddd18b7a4993a7cb" exitCode=0 Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.359750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" event={"ID":"6be5923f-70ed-45b5-a747-d4008eaeb656","Type":"ContainerDied","Data":"cfcd2e4e872e3af5710dab363dbe65580e2c5dc1a19ac0d3ddd18b7a4993a7cb"} Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.427199 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=29.956467544 podStartE2EDuration="41.427170414s" podCreationTimestamp="2026-02-18 14:18:18 +0000 UTC" firstStartedPulling="2026-02-18 14:18:44.930167092 +0000 UTC m=+1157.425888014" lastFinishedPulling="2026-02-18 14:18:56.400869962 +0000 UTC m=+1168.896590884" observedRunningTime="2026-02-18 14:18:59.409934925 +0000 UTC m=+1171.905655867" watchObservedRunningTime="2026-02-18 14:18:59.427170414 +0000 UTC m=+1171.922891346" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.369823 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerID="d130ba5106c46e0eaf379f38920ded0167eab599120dd5d9ffdf9b8b0e9aa0ac" exitCode=0 Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.369912 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerDied","Data":"d130ba5106c46e0eaf379f38920ded0167eab599120dd5d9ffdf9b8b0e9aa0ac"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.373509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerStarted","Data":"a1e18a076520af601e6507f431aa025a06385212521ec627530586a088f11655"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 
14:19:00.375394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"39286c8b-55e8-41a2-9f36-a7ce475e8313","Type":"ContainerStarted","Data":"edf3147b8d3130f9675e86b1307940f68245f8d8af9ed1e99164984560a1a39b"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.376698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.380735 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" event={"ID":"6be5923f-70ed-45b5-a747-d4008eaeb656","Type":"ContainerStarted","Data":"3443e43c58386c804ca6165dd28e66e4ea94a17fafa09b78c69723fdb9a1bd18"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.381284 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.382903 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"74c434ad-eea8-4896-b65d-26eb1ca89f84","Type":"ContainerStarted","Data":"6cd36b7a4f4aa4fe88020c9da4998dddd480cafb409fd3536e4cea2f42464a7f"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.385382 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cglq" event={"ID":"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7","Type":"ContainerStarted","Data":"90b114409ae0c12df7f5e3c2d0abb3dcbc6832e00511c218de385692da1a3738"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.386024 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.386130 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.389042 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"22289461-6c53-461c-adfe-0f1cd7209928","Type":"ContainerStarted","Data":"b9d0c9e1cda0978464ffa7aad3ccc13df307b6e1e6e4de19f5cdf27549033bcd"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.424546 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" podStartSLOduration=3.6371383980000003 podStartE2EDuration="49.424524667s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:12.239484489 +0000 UTC m=+1124.735205411" lastFinishedPulling="2026-02-18 14:18:58.026870768 +0000 UTC m=+1170.522591680" observedRunningTime="2026-02-18 14:19:00.417320604 +0000 UTC m=+1172.913041546" watchObservedRunningTime="2026-02-18 14:19:00.424524667 +0000 UTC m=+1172.920245599" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.479674 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.503330262 podStartE2EDuration="37.479649971s" podCreationTimestamp="2026-02-18 14:18:23 +0000 UTC" firstStartedPulling="2026-02-18 14:18:49.862865349 +0000 UTC m=+1162.358586491" lastFinishedPulling="2026-02-18 14:18:59.839185258 +0000 UTC m=+1172.334906200" observedRunningTime="2026-02-18 14:19:00.446734463 +0000 UTC m=+1172.942455395" watchObservedRunningTime="2026-02-18 14:19:00.479649971 +0000 UTC m=+1172.975370893" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.504990 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=30.487702014 podStartE2EDuration="40.504960836s" podCreationTimestamp="2026-02-18 14:18:20 +0000 UTC" firstStartedPulling="2026-02-18 14:18:49.863828173 +0000 UTC m=+1162.359549095" lastFinishedPulling="2026-02-18 14:18:59.881086985 +0000 UTC m=+1172.376807917" observedRunningTime="2026-02-18 14:19:00.501730844 +0000 UTC m=+1172.997451776" 
watchObservedRunningTime="2026-02-18 14:19:00.504960836 +0000 UTC m=+1173.000681758" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.530263 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5cglq" podStartSLOduration=31.486322315 podStartE2EDuration="40.53024281s" podCreationTimestamp="2026-02-18 14:18:20 +0000 UTC" firstStartedPulling="2026-02-18 14:18:45.882117991 +0000 UTC m=+1158.377838913" lastFinishedPulling="2026-02-18 14:18:54.926038486 +0000 UTC m=+1167.421759408" observedRunningTime="2026-02-18 14:19:00.522307758 +0000 UTC m=+1173.018028690" watchObservedRunningTime="2026-02-18 14:19:00.53024281 +0000 UTC m=+1173.025963732" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.553486 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.761501492 podStartE2EDuration="45.553459471s" podCreationTimestamp="2026-02-18 14:18:15 +0000 UTC" firstStartedPulling="2026-02-18 14:18:17.160384441 +0000 UTC m=+1129.656105353" lastFinishedPulling="2026-02-18 14:18:59.9523424 +0000 UTC m=+1172.448063332" observedRunningTime="2026-02-18 14:19:00.546987746 +0000 UTC m=+1173.042708698" watchObservedRunningTime="2026-02-18 14:19:00.553459471 +0000 UTC m=+1173.049180413" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.835904 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.875840 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 14:19:01 crc kubenswrapper[4739]: I0218 14:19:01.351341 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 14:19:01 crc kubenswrapper[4739]: I0218 14:19:01.400669 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" 
Feb 18 14:19:01 crc kubenswrapper[4739]: I0218 14:19:01.403364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerStarted","Data":"fb30ffa6dd77c2c26a2c94054232a01d5f2a2fce3604e07af9341e21e49fc7b5"} Feb 18 14:19:01 crc kubenswrapper[4739]: I0218 14:19:01.403410 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 14:19:01 crc kubenswrapper[4739]: I0218 14:19:01.405101 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.417314 4739 generic.go:334] "Generic (PLEG): container finished" podID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerID="fb30ffa6dd77c2c26a2c94054232a01d5f2a2fce3604e07af9341e21e49fc7b5" exitCode=0 Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.426808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerDied","Data":"fb30ffa6dd77c2c26a2c94054232a01d5f2a2fce3604e07af9341e21e49fc7b5"} Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.471458 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.471876 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.817503 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"] Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.913140 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-q6g47"] Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.916146 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.931507 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.004056 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"] Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.010643 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.016085 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.030345 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovn-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.030493 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh5gk\" (UniqueName: \"kubernetes.io/projected/8daa97ee-3449-4043-8218-71aaa601c37c-kube-api-access-dh5gk\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.030686 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" 
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.031044 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovs-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.031080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8daa97ee-3449-4043-8218-71aaa601c37c-config\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.031131 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-combined-ca-bundle\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.053562 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q6g47"]
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.092036 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"]
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.111731 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.113330 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.116562 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.116750 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.116859 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.117055 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-cn9lh"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.120806 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.132751 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovn-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.132831 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk2jt\" (UniqueName: \"kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.132857 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh5gk\" (UniqueName: \"kubernetes.io/projected/8daa97ee-3449-4043-8218-71aaa601c37c-kube-api-access-dh5gk\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.132915 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.132977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.133002 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8daa97ee-3449-4043-8218-71aaa601c37c-config\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.133018 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovs-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.133043 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-combined-ca-bundle\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.133100 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.133126 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.134682 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovn-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.135232 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovs-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.136016 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8daa97ee-3449-4043-8218-71aaa601c37c-config\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.136069 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"]
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.136307 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="dnsmasq-dns" containerID="cri-o://3443e43c58386c804ca6165dd28e66e4ea94a17fafa09b78c69723fdb9a1bd18" gracePeriod=10
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.144349 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-combined-ca-bundle\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.153822 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.156357 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"]
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.158002 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh5gk\" (UniqueName: \"kubernetes.io/projected/8daa97ee-3449-4043-8218-71aaa601c37c-kube-api-access-dh5gk\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.159282 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.165067 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.174537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"]
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.234955 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235001 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lkd\" (UniqueName: \"kubernetes.io/projected/b3be45be-9ee4-4114-b2e5-78d9b0341129-kube-api-access-w4lkd\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235072 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235112 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235147 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235170 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-config\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235245 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk2jt\" (UniqueName: \"kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235461 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235520 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235645 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-scripts\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235719 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rjsz\" (UniqueName: \"kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.237271 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.238477 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.241593 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.255379 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk2jt\" (UniqueName: \"kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.271141 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-q6g47"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.338483 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.339330 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.339495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.339566 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-scripts\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.339995 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.340106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.340988 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rjsz\" (UniqueName: \"kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341118 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lkd\" (UniqueName: \"kubernetes.io/projected/b3be45be-9ee4-4114-b2e5-78d9b0341129-kube-api-access-w4lkd\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341352 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341394 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-config\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341526 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341562 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341957 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.342838 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-scripts\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.344017 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.344346 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.344653 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-config\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.351238 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.413858 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.414257 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lkd\" (UniqueName: \"kubernetes.io/projected/b3be45be-9ee4-4114-b2e5-78d9b0341129-kube-api-access-w4lkd\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.414866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.416375 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.419432 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rjsz\" (UniqueName: \"kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.520163 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.542700 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gf2dl"
Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.906079 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"]
Feb 18 14:19:03 crc kubenswrapper[4739]: W0218 14:19:03.916784 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3866887c_44e3_4436_bd88_bbc56f572f77.slice/crio-01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515 WatchSource:0}: Error finding container 01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515: Status 404 returned error can't find the container with id 01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.174363 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q6g47"]
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.277598 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.286898 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"]
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.450877 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerStarted","Data":"9a8c6991c718d6822034294c3ea725bf4baae3bf25f08bd92ff340a388c73bdb"}
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.451142 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="dnsmasq-dns" containerID="cri-o://9a8c6991c718d6822034294c3ea725bf4baae3bf25f08bd92ff340a388c73bdb" gracePeriod=10
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.451533 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.454738 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q6g47" event={"ID":"8daa97ee-3449-4043-8218-71aaa601c37c","Type":"ContainerStarted","Data":"b0fb7507bd04ddb20fc9d1843f66653d61547463570194834cee73f2779dcc6b"}
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.457750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" event={"ID":"3866887c-44e3-4436-bd88-bbc56f572f77","Type":"ContainerStarted","Data":"01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515"}
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.458703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerStarted","Data":"6c0344dcd1980d3e621d946739f4b13130dbeab96724b311a0270793512ebb0c"}
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.460614 4739 generic.go:334] "Generic (PLEG): container finished" podID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerID="3443e43c58386c804ca6165dd28e66e4ea94a17fafa09b78c69723fdb9a1bd18" exitCode=0
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.460707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" event={"ID":"6be5923f-70ed-45b5-a747-d4008eaeb656","Type":"ContainerDied","Data":"3443e43c58386c804ca6165dd28e66e4ea94a17fafa09b78c69723fdb9a1bd18"}
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.461691 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3be45be-9ee4-4114-b2e5-78d9b0341129","Type":"ContainerStarted","Data":"4bb9d2508f342a005d4553e95f9b8ae69a3950ee2fff78abc67f8fbc5d7c9871"}
Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.473266 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" podStartSLOduration=-9223371983.381535 podStartE2EDuration="53.473240491s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:12.682473129 +0000 UTC m=+1125.178194051" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:04.469928146 +0000 UTC m=+1176.965649088" watchObservedRunningTime="2026-02-18 14:19:04.473240491 +0000 UTC m=+1176.968961413"
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.223614 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.314756 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc\") pod \"6be5923f-70ed-45b5-a747-d4008eaeb656\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") "
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.314926 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gm6x\" (UniqueName: \"kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x\") pod \"6be5923f-70ed-45b5-a747-d4008eaeb656\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") "
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.315001 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config\") pod \"6be5923f-70ed-45b5-a747-d4008eaeb656\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") "
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.338970 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x" (OuterVolumeSpecName: "kube-api-access-9gm6x") pod "6be5923f-70ed-45b5-a747-d4008eaeb656" (UID: "6be5923f-70ed-45b5-a747-d4008eaeb656"). InnerVolumeSpecName "kube-api-access-9gm6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.369036 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config" (OuterVolumeSpecName: "config") pod "6be5923f-70ed-45b5-a747-d4008eaeb656" (UID: "6be5923f-70ed-45b5-a747-d4008eaeb656"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.378661 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6be5923f-70ed-45b5-a747-d4008eaeb656" (UID: "6be5923f-70ed-45b5-a747-d4008eaeb656"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.418828 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.418874 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gm6x\" (UniqueName: \"kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x\") on node \"crc\" DevicePath \"\""
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.418891 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.472246 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerStarted","Data":"f2cdf7655b497075da25ea2d8a12a5618350bcc5c996868ab38470ae9cd7ab7d"}
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.474852 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" event={"ID":"6be5923f-70ed-45b5-a747-d4008eaeb656","Type":"ContainerDied","Data":"818a67c85ce926301db3afa89b1bb5c3ac9bbdbced8966f71ba1d63af4f883cc"}
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.474924 4739 scope.go:117] "RemoveContainer" containerID="3443e43c58386c804ca6165dd28e66e4ea94a17fafa09b78c69723fdb9a1bd18"
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.474879 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c68ds"
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.477751 4739 generic.go:334] "Generic (PLEG): container finished" podID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerID="9a8c6991c718d6822034294c3ea725bf4baae3bf25f08bd92ff340a388c73bdb" exitCode=0
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.477830 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerDied","Data":"9a8c6991c718d6822034294c3ea725bf4baae3bf25f08bd92ff340a388c73bdb"}
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.479739 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q6g47" event={"ID":"8daa97ee-3449-4043-8218-71aaa601c37c","Type":"ContainerStarted","Data":"c8b0788260d81388963cdc086497eb2881ef21cffe9b4a2c4758d7b22d5d9820"}
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.481500 4739 generic.go:334] "Generic (PLEG): container finished" podID="3866887c-44e3-4436-bd88-bbc56f572f77" containerID="edabb29e619ae1eeb2b3b44d914c9284ac1c7ae85b8069685bf0ec6983667b3d" exitCode=0
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.481544 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" event={"ID":"3866887c-44e3-4436-bd88-bbc56f572f77","Type":"ContainerDied","Data":"edabb29e619ae1eeb2b3b44d914c9284ac1c7ae85b8069685bf0ec6983667b3d"}
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.502725 4739 scope.go:117] "RemoveContainer" containerID="cfcd2e4e872e3af5710dab363dbe65580e2c5dc1a19ac0d3ddd18b7a4993a7cb"
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.527820 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"]
Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.547980 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"]
Feb 18 14:19:06 crc kubenswrapper[4739]: I0218 14:19:06.059629 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 18 14:19:06 crc kubenswrapper[4739]: I0218 14:19:06.247033 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q9846"
Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.423336 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" path="/var/lib/kubelet/pods/6be5923f-70ed-45b5-a747-d4008eaeb656/volumes"
Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.442811 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcpzg\" (UniqueName: \"kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg\") pod \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") "
Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.442956 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc\") pod \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") "
Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.442982 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config\") pod \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") "
Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.453577 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg" (OuterVolumeSpecName: "kube-api-access-tcpzg") pod 
"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" (UID: "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc"). InnerVolumeSpecName "kube-api-access-tcpzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.502119 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config" (OuterVolumeSpecName: "config") pod "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" (UID: "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.507394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerDied","Data":"8bde76f9b97130d02eb6cd439713bddac781417cc738a4a05c1874baac5770d7"} Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.507475 4739 scope.go:117] "RemoveContainer" containerID="9a8c6991c718d6822034294c3ea725bf4baae3bf25f08bd92ff340a388c73bdb" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.507612 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.515262 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" (UID: "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.520167 4739 generic.go:334] "Generic (PLEG): container finished" podID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerID="f2cdf7655b497075da25ea2d8a12a5618350bcc5c996868ab38470ae9cd7ab7d" exitCode=0 Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.520251 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerDied","Data":"f2cdf7655b497075da25ea2d8a12a5618350bcc5c996868ab38470ae9cd7ab7d"} Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.556160 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcpzg\" (UniqueName: \"kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.556187 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.556197 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.560119 4739 scope.go:117] "RemoveContainer" containerID="fb30ffa6dd77c2c26a2c94054232a01d5f2a2fce3604e07af9341e21e49fc7b5" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.629396 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-q6g47" podStartSLOduration=4.629327718 podStartE2EDuration="4.629327718s" podCreationTimestamp="2026-02-18 14:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 14:19:06.571579177 +0000 UTC m=+1179.067300119" watchObservedRunningTime="2026-02-18 14:19:06.629327718 +0000 UTC m=+1179.125048640" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.866241 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"] Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.875805 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"] Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:07.535526 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" event={"ID":"3866887c-44e3-4436-bd88-bbc56f572f77","Type":"ContainerStarted","Data":"81f81c7066b7b4c95e8c6b6a3d0a11548cf322b1e9bf818f0a394ac79e2c2399"} Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:07.535909 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:07.560661 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" podStartSLOduration=5.560642269 podStartE2EDuration="5.560642269s" podCreationTimestamp="2026-02-18 14:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:07.554146594 +0000 UTC m=+1180.049867546" watchObservedRunningTime="2026-02-18 14:19:07.560642269 +0000 UTC m=+1180.056363191" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.423253 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" path="/var/lib/kubelet/pods/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc/volumes" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.548374 4739 generic.go:334] "Generic (PLEG): container finished" podID="acc9bbc5-8705-410b-977b-ca9245834e36" 
containerID="874c74820b18d639be27757d978d0db13d377177e4472870e9ded39d3bfa20c9" exitCode=0 Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.548465 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerDied","Data":"874c74820b18d639be27757d978d0db13d377177e4472870e9ded39d3bfa20c9"} Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.554261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerStarted","Data":"bd4ca7eba39454221d510f944a98375576604027d6f8bc4b8cf191891479a9fb"} Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.813018 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"] Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837235 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"] Feb 18 14:19:08 crc kubenswrapper[4739]: E0218 14:19:08.837610 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837622 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: E0218 14:19:08.837637 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="init" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837642 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="init" Feb 18 14:19:08 crc kubenswrapper[4739]: E0218 14:19:08.837658 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="init" Feb 18 
14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837664 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="init" Feb 18 14:19:08 crc kubenswrapper[4739]: E0218 14:19:08.837681 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837686 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837848 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837867 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.838828 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.863237 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"] Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.956963 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.009609 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.009699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.009751 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.009860 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" 
Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.009895 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpddl\" (UniqueName: \"kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.112264 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.112608 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpddl\" (UniqueName: \"kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.112774 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.112940 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc 
kubenswrapper[4739]: I0218 14:19:09.113087 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.113185 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.113757 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.114380 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.114539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.139867 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zpddl\" (UniqueName: \"kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.168134 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.563400 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" containerID="cri-o://81f81c7066b7b4c95e8c6b6a3d0a11548cf322b1e9bf818f0a394ac79e2c2399" gracePeriod=10 Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.666116 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"] Feb 18 14:19:09 crc kubenswrapper[4739]: W0218 14:19:09.673494 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1ac31ff_21d1_41d9_9b77_15e64a2cd5f0.slice/crio-d2bcc5bdfd6b01d7eae8c031aa45506d66a71e0990ef1e90815d622f0b826c17 WatchSource:0}: Error finding container d2bcc5bdfd6b01d7eae8c031aa45506d66a71e0990ef1e90815d622f0b826c17: Status 404 returned error can't find the container with id d2bcc5bdfd6b01d7eae8c031aa45506d66a71e0990ef1e90815d622f0b826c17 Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.093357 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.100958 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.103483 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.103629 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.103634 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.103645 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-l5wd5" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.120651 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.240699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgm4b\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-kube-api-access-bgm4b\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.240758 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.240969 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-lock\") pod \"swift-storage-0\" (UID: 
\"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.240993 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-cache\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.241047 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da69d20-d4af-4d8d-b1e1-5026676d2078-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.241083 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343535 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-lock\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343597 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-cache\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343637 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da69d20-d4af-4d8d-b1e1-5026676d2078-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343666 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgm4b\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-kube-api-access-bgm4b\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343744 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.344225 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.344271 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.344334 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift 
podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:10.844311021 +0000 UTC m=+1183.340032023 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.344871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-lock\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.344986 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-cache\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.352768 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da69d20-d4af-4d8d-b1e1-5026676d2078-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.365866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgm4b\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-kube-api-access-bgm4b\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.366380 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not 
set. Skipping MountDevice... Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.366434 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/829f65a67044aa26f8514bb78b3970abc3028c65012918f695be6c1b9f081038/globalmount\"" pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.407270 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.573138 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" event={"ID":"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0","Type":"ContainerStarted","Data":"d2bcc5bdfd6b01d7eae8c031aa45506d66a71e0990ef1e90815d622f0b826c17"} Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.855351 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.855760 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.855851 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.856010 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:11.855987884 +0000 UTC m=+1184.351708826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:11 crc kubenswrapper[4739]: I0218 14:19:11.882079 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:11 crc kubenswrapper[4739]: E0218 14:19:11.882743 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:11 crc kubenswrapper[4739]: E0218 14:19:11.882802 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:11 crc kubenswrapper[4739]: E0218 14:19:11.882855 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:13.882835139 +0000 UTC m=+1186.378556061 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.353142 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: connect: connection refused" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.725101 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cfjpx"] Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.731108 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.733744 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.733753 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.734189 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.736345 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cfjpx"] Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769423 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 
14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769828 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769845 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twx6f\" (UniqueName: \"kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769919 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " 
pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769958 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877521 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877578 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877685 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877711 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twx6f\" (UniqueName: \"kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc 
kubenswrapper[4739]: I0218 14:19:13.877761 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877800 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877863 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.879166 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.879171 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.880096 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.884527 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.889813 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.892766 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.901859 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twx6f\" (UniqueName: \"kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.980496 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod 
\"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:13 crc kubenswrapper[4739]: E0218 14:19:13.980800 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:13 crc kubenswrapper[4739]: E0218 14:19:13.980835 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:13 crc kubenswrapper[4739]: E0218 14:19:13.980906 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:17.980877157 +0000 UTC m=+1190.476598079 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:14 crc kubenswrapper[4739]: I0218 14:19:14.051764 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:16 crc kubenswrapper[4739]: I0218 14:19:16.296738 4739 generic.go:334] "Generic (PLEG): container finished" podID="3866887c-44e3-4436-bd88-bbc56f572f77" containerID="81f81c7066b7b4c95e8c6b6a3d0a11548cf322b1e9bf818f0a394ac79e2c2399" exitCode=0 Feb 18 14:19:16 crc kubenswrapper[4739]: I0218 14:19:16.296812 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" event={"ID":"3866887c-44e3-4436-bd88-bbc56f572f77","Type":"ContainerDied","Data":"81f81c7066b7b4c95e8c6b6a3d0a11548cf322b1e9bf818f0a394ac79e2c2399"} Feb 18 14:19:16 crc kubenswrapper[4739]: I0218 14:19:16.369171 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-58cc898c97-gzzx9" podUID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" containerName="console" containerID="cri-o://0944c4f82b66901b45134e70e812dca310249100c057d0ce2374a1d9db397c6f" gracePeriod=15 Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.310069 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerStarted","Data":"fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f"} Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.311987 4739 generic.go:334] "Generic (PLEG): container finished" podID="869aa11b-eba7-4598-90dc-d50c642b9120" containerID="a3ef49497c95dfe6772ec7c1fb042eaa0e995bd29a78ec8447b2892bb58cef30" exitCode=0 Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.312058 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerDied","Data":"a3ef49497c95dfe6772ec7c1fb042eaa0e995bd29a78ec8447b2892bb58cef30"} Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.316940 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerID="444fdbf2047039f125d6d76b03e432e4f2458521013159c69b011aaf37854298" exitCode=0 Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.317021 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" event={"ID":"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0","Type":"ContainerDied","Data":"444fdbf2047039f125d6d76b03e432e4f2458521013159c69b011aaf37854298"} Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.320087 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58cc898c97-gzzx9_4cd95c4f-592d-4c7e-bdeb-ec99b168126b/console/0.log" Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.320132 4739 generic.go:334] "Generic (PLEG): container finished" podID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" containerID="0944c4f82b66901b45134e70e812dca310249100c057d0ce2374a1d9db397c6f" exitCode=2 Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.321262 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58cc898c97-gzzx9" event={"ID":"4cd95c4f-592d-4c7e-bdeb-ec99b168126b","Type":"ContainerDied","Data":"0944c4f82b66901b45134e70e812dca310249100c057d0ce2374a1d9db397c6f"} Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.321304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.323211 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.342506 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.375498707 podStartE2EDuration="1m5.34248723s" podCreationTimestamp="2026-02-18 14:18:12 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.531954466 +0000 UTC m=+1129.027675398" 
lastFinishedPulling="2026-02-18 14:18:55.498942999 +0000 UTC m=+1167.994663921" observedRunningTime="2026-02-18 14:19:17.334064025 +0000 UTC m=+1189.829784947" watchObservedRunningTime="2026-02-18 14:19:17.34248723 +0000 UTC m=+1189.838208172" Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.990265 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:17 crc kubenswrapper[4739]: E0218 14:19:17.990436 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:17 crc kubenswrapper[4739]: E0218 14:19:17.990483 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:17 crc kubenswrapper[4739]: E0218 14:19:17.990533 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:25.990516586 +0000 UTC m=+1198.486237508 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:18 crc kubenswrapper[4739]: E0218 14:19:18.354592 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Feb 18 14:19:18 crc kubenswrapper[4739]: E0218 14:19:18.354784 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnhmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(fdf07d43-6839-4ae1-9efd-bd21557e31f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.358285 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" event={"ID":"3866887c-44e3-4436-bd88-bbc56f572f77","Type":"ContainerDied","Data":"01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515"} Feb 18 14:19:18 crc 
kubenswrapper[4739]: I0218 14:19:18.358332 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.388902 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.431297 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-gf2dl" podStartSLOduration=15.431270732 podStartE2EDuration="15.431270732s" podCreationTimestamp="2026-02-18 14:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:17.404157191 +0000 UTC m=+1189.899878113" watchObservedRunningTime="2026-02-18 14:19:18.431270732 +0000 UTC m=+1190.926991664" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.502248 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc\") pod \"3866887c-44e3-4436-bd88-bbc56f572f77\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.502389 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb\") pod \"3866887c-44e3-4436-bd88-bbc56f572f77\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.502517 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk2jt\" (UniqueName: \"kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt\") pod \"3866887c-44e3-4436-bd88-bbc56f572f77\" (UID: 
\"3866887c-44e3-4436-bd88-bbc56f572f77\") " Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.502631 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config\") pod \"3866887c-44e3-4436-bd88-bbc56f572f77\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.507989 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt" (OuterVolumeSpecName: "kube-api-access-sk2jt") pod "3866887c-44e3-4436-bd88-bbc56f572f77" (UID: "3866887c-44e3-4436-bd88-bbc56f572f77"). InnerVolumeSpecName "kube-api-access-sk2jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.556389 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3866887c-44e3-4436-bd88-bbc56f572f77" (UID: "3866887c-44e3-4436-bd88-bbc56f572f77"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.568878 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config" (OuterVolumeSpecName: "config") pod "3866887c-44e3-4436-bd88-bbc56f572f77" (UID: "3866887c-44e3-4436-bd88-bbc56f572f77"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:18 crc kubenswrapper[4739]: E0218 14:19:18.569302 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified" Feb 18 14:19:18 crc kubenswrapper[4739]: E0218 14:19:18.569534 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7fh5d6h8dh574h586h548h5f8h657h84h9dh8chc6h84h5c7h57h8hc6h559h88h57h64dhb8h95h9fh647h67dh55ch65hffh559hb5h695q,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:nf4hcfh556hddh557h5fbhd5h5dfh575h5b5h694h55ch55bh674h67h5d4hdfh5b9h54fh597hc7h598h9h8h568h5b5h55fh78h566h676h54h577q,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:nc7h674h559h684h5c8h77hbfh55h5b8h5c6h5f9h5cdh75h67bh55fh67fh5f5h5bbh66ch4h556h558hbfh5dh57bh588h56dhc7h68h57chc4h86q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:n5cch5f9h7fhbhbch58fhd8h58bh659h5c5h67dh66fh5h6fh545hbh68dh685h5fdh676h599h679h5ffh585h5f6h5c5h588h667h676h575h5h7q,ValueFrom:nil,},},Resources:ResourceRe
quirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4lkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Dr
op:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(b3be45be-9ee4-4114-b2e5-78d9b0341129): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.585344 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3866887c-44e3-4436-bd88-bbc56f572f77" (UID: "3866887c-44e3-4436-bd88-bbc56f572f77"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.607046 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.607082 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.607095 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk2jt\" (UniqueName: \"kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.607105 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:18 crc kubenswrapper[4739]: E0218 14:19:18.900260 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-northd-0" podUID="b3be45be-9ee4-4114-b2e5-78d9b0341129" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.934903 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58cc898c97-gzzx9_4cd95c4f-592d-4c7e-bdeb-ec99b168126b/console/0.log" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.934983 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.015522 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.016630 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7h7c\" (UniqueName: \"kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.016942 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.018300 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.018954 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca" (OuterVolumeSpecName: "service-ca") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.019123 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.019297 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.020073 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.019438 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.020769 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.020083 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config" (OuterVolumeSpecName: "console-config") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.022246 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.022277 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.022287 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.022297 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.026750 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c" (OuterVolumeSpecName: "kube-api-access-f7h7c") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "kube-api-access-f7h7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.026867 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.028675 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.119406 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cfjpx"] Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.124002 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7h7c\" (UniqueName: \"kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.124089 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.124155 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.366301 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cfjpx" event={"ID":"ab89b7a2-642d-4a99-9eb4-f01b2990e75d","Type":"ContainerStarted","Data":"542842abdf2ee0753ae804a9cea526e4b6d5b0555fbd53a632bf6c534bb3371f"} Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.368666 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerStarted","Data":"9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74"} Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.373374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" event={"ID":"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0","Type":"ContainerStarted","Data":"bd2acd3a75008df77a9a70e8c10e031a2f47232a877e8beae462dd4837d94738"} Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.374360 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.375599 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58cc898c97-gzzx9_4cd95c4f-592d-4c7e-bdeb-ec99b168126b/console/0.log" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.375720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58cc898c97-gzzx9" event={"ID":"4cd95c4f-592d-4c7e-bdeb-ec99b168126b","Type":"ContainerDied","Data":"df9030b739dbc83cef12914ae8d05fcfaf3c9ae9c31af8304d4b753fc912b097"} Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.375793 4739 scope.go:117] "RemoveContainer" containerID="0944c4f82b66901b45134e70e812dca310249100c057d0ce2374a1d9db397c6f" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.375922 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.380339 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.383219 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3be45be-9ee4-4114-b2e5-78d9b0341129","Type":"ContainerStarted","Data":"6c7fb6f1999ca15fc619b1f0a7989fe1807e432b96c137cfc426b535e81aa656"} Feb 18 14:19:19 crc kubenswrapper[4739]: E0218 14:19:19.385097 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="b3be45be-9ee4-4114-b2e5-78d9b0341129" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.393361 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371971.461437 podStartE2EDuration="1m5.393337827s" podCreationTimestamp="2026-02-18 14:18:14 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.865636745 +0000 UTC m=+1129.361357667" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:19.38990865 +0000 UTC m=+1191.885629582" watchObservedRunningTime="2026-02-18 14:19:19.393337827 +0000 UTC m=+1191.889058749" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.436028 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" podStartSLOduration=11.436007274 podStartE2EDuration="11.436007274s" podCreationTimestamp="2026-02-18 14:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:19.433702615 +0000 UTC m=+1191.929423547" watchObservedRunningTime="2026-02-18 14:19:19.436007274 +0000 UTC m=+1191.931728196" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.464319 4739 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"] Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.472582 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"] Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.481868 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.491113 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:19:20 crc kubenswrapper[4739]: E0218 14:19:20.412338 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="b3be45be-9ee4-4114-b2e5-78d9b0341129" Feb 18 14:19:20 crc kubenswrapper[4739]: I0218 14:19:20.479057 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" path="/var/lib/kubelet/pods/3866887c-44e3-4436-bd88-bbc56f572f77/volumes" Feb 18 14:19:20 crc kubenswrapper[4739]: I0218 14:19:20.480337 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" path="/var/lib/kubelet/pods/4cd95c4f-592d-4c7e-bdeb-ec99b168126b/volumes" Feb 18 14:19:21 crc kubenswrapper[4739]: I0218 14:19:21.748981 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:19:22 crc kubenswrapper[4739]: I0218 14:19:22.429036 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerStarted","Data":"20e4696ddb81097644db58c7ff47cdd8db35bca8af8eb47dfd10333be0e9ab30"} Feb 18 14:19:23 crc kubenswrapper[4739]: I0218 
14:19:23.355864 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Feb 18 14:19:23 crc kubenswrapper[4739]: I0218 14:19:23.451982 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cfjpx" event={"ID":"ab89b7a2-642d-4a99-9eb4-f01b2990e75d","Type":"ContainerStarted","Data":"74f496583eea24c7aa24787e4734e6c62cca95951d885c0cd6942e3b4f8ff69f"} Feb 18 14:19:23 crc kubenswrapper[4739]: I0218 14:19:23.468466 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-cfjpx" podStartSLOduration=7.3844129800000005 podStartE2EDuration="10.468428222s" podCreationTimestamp="2026-02-18 14:19:13 +0000 UTC" firstStartedPulling="2026-02-18 14:19:19.125033683 +0000 UTC m=+1191.620754605" lastFinishedPulling="2026-02-18 14:19:22.209048925 +0000 UTC m=+1194.704769847" observedRunningTime="2026-02-18 14:19:23.467606041 +0000 UTC m=+1195.963326973" watchObservedRunningTime="2026-02-18 14:19:23.468428222 +0000 UTC m=+1195.964149144" Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.172467 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.265935 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"] Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.266591 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-gf2dl" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="dnsmasq-dns" containerID="cri-o://bd4ca7eba39454221d510f944a98375576604027d6f8bc4b8cf191891479a9fb" gracePeriod=10 Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.595577 4739 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.595946 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.862574 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 14:19:25 crc kubenswrapper[4739]: I0218 14:19:25.489340 4739 generic.go:334] "Generic (PLEG): container finished" podID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerID="bd4ca7eba39454221d510f944a98375576604027d6f8bc4b8cf191891479a9fb" exitCode=0 Feb 18 14:19:25 crc kubenswrapper[4739]: I0218 14:19:25.489411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerDied","Data":"bd4ca7eba39454221d510f944a98375576604027d6f8bc4b8cf191891479a9fb"} Feb 18 14:19:25 crc kubenswrapper[4739]: I0218 14:19:25.570023 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.009593 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.009902 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.080818 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.081012 4739 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.081037 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.081100 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:42.081078389 +0000 UTC m=+1214.576799301 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.090406 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.595885 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.886851 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d1e3-account-create-update-27rvz"] Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.887348 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="init" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.887365 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="init" Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.887387 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" 
containerName="dnsmasq-dns" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.887394 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.887426 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" containerName="console" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.887433 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" containerName="console" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.887668 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" containerName="console" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.887688 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.888504 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.894317 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.901546 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-nndld"] Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.902963 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nndld" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.944181 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nndld"] Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.955759 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d1e3-account-create-update-27rvz"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.013149 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.013225 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.013382 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdjf\" (UniqueName: \"kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.013557 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lpld\" (UniqueName: \"kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld\") pod \"glance-db-create-nndld\" 
(UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.060499 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fwtxs"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.062165 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.073171 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fwtxs"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.116046 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.116124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.116243 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdjf\" (UniqueName: \"kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.116358 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lpld\" (UniqueName: 
\"kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.117101 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.117112 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.151841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdjf\" (UniqueName: \"kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.162072 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lpld\" (UniqueName: \"kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.217778 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nplp\" (UniqueName: 
\"kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.217904 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.221153 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.240887 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.246662 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-973a-account-create-update-lsz5w"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.247999 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.258753 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.288380 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-973a-account-create-update-lsz5w"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.334955 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nplp\" (UniqueName: \"kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.335047 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.335157 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.335227 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9mcc\" (UniqueName: \"kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: 
\"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.336313 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.357956 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nplp\" (UniqueName: \"kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.359013 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-x8lmx"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.360597 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.384093 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x8lmx"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.393888 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.404037 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.436999 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9mcc\" (UniqueName: \"kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.437194 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.438052 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.473909 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9mcc\" (UniqueName: \"kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.474611 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4dc5-account-create-update-shnqq"] Feb 18 14:19:27 crc kubenswrapper[4739]: E0218 14:19:27.475088 4739 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="dnsmasq-dns" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.475111 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="dnsmasq-dns" Feb 18 14:19:27 crc kubenswrapper[4739]: E0218 14:19:27.475148 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="init" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.475155 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="init" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.475337 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="dnsmasq-dns" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.476144 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.480759 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.484433 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4dc5-account-create-update-shnqq"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.540798 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config\") pod \"80f2df75-0584-449d-bd30-80aa45c8f5ff\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.540860 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc\") pod \"80f2df75-0584-449d-bd30-80aa45c8f5ff\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.540918 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb\") pod \"80f2df75-0584-449d-bd30-80aa45c8f5ff\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.540965 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb\") pod \"80f2df75-0584-449d-bd30-80aa45c8f5ff\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.541040 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rjsz\" (UniqueName: 
\"kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz\") pod \"80f2df75-0584-449d-bd30-80aa45c8f5ff\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.541460 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdpkq\" (UniqueName: \"kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq\") pod \"placement-db-create-x8lmx\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.541623 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts\") pod \"placement-db-create-x8lmx\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.549331 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz" (OuterVolumeSpecName: "kube-api-access-6rjsz") pod "80f2df75-0584-449d-bd30-80aa45c8f5ff" (UID: "80f2df75-0584-449d-bd30-80aa45c8f5ff"). InnerVolumeSpecName "kube-api-access-6rjsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.552999 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.553768 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerDied","Data":"6c0344dcd1980d3e621d946739f4b13130dbeab96724b311a0270793512ebb0c"} Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.553806 4739 scope.go:117] "RemoveContainer" containerID="bd4ca7eba39454221d510f944a98375576604027d6f8bc4b8cf191891479a9fb" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.611034 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.612754 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config" (OuterVolumeSpecName: "config") pod "80f2df75-0584-449d-bd30-80aa45c8f5ff" (UID: "80f2df75-0584-449d-bd30-80aa45c8f5ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.617763 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80f2df75-0584-449d-bd30-80aa45c8f5ff" (UID: "80f2df75-0584-449d-bd30-80aa45c8f5ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.622731 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80f2df75-0584-449d-bd30-80aa45c8f5ff" (UID: "80f2df75-0584-449d-bd30-80aa45c8f5ff"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.638237 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80f2df75-0584-449d-bd30-80aa45c8f5ff" (UID: "80f2df75-0584-449d-bd30-80aa45c8f5ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.640873 4739 scope.go:117] "RemoveContainer" containerID="f2cdf7655b497075da25ea2d8a12a5618350bcc5c996868ab38470ae9cd7ab7d" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643092 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts\") pod \"placement-db-create-x8lmx\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdpkq\" (UniqueName: \"kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq\") pod \"placement-db-create-x8lmx\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643251 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsqzg\" (UniqueName: \"kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643277 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643411 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643562 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643576 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643586 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643599 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rjsz\" (UniqueName: \"kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.646019 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts\") pod \"placement-db-create-x8lmx\" (UID: 
\"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.662209 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdpkq\" (UniqueName: \"kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq\") pod \"placement-db-create-x8lmx\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.680249 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.745184 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsqzg\" (UniqueName: \"kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.745611 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.746994 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.763674 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsqzg\" (UniqueName: \"kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.808847 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.900636 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.915561 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.974071 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nndld"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.054933 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d1e3-account-create-update-27rvz"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.067562 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fwtxs"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.389068 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.451724 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" path="/var/lib/kubelet/pods/80f2df75-0584-449d-bd30-80aa45c8f5ff/volumes" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.613838 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-m9bmk"] Feb 18 14:19:28 crc kubenswrapper[4739]: 
I0218 14:19:28.616247 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.620152 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fwtxs" event={"ID":"075a587a-4bf2-43e9-8c63-1357e9cb05c9","Type":"ContainerStarted","Data":"9f0626a8e486de18d204ce8ce30bfe092ee4b300499982be629e59e5f5aca34d"} Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.626727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nndld" event={"ID":"b08bf9ca-ebbc-4d72-b227-20a5c7eed529","Type":"ContainerStarted","Data":"613a7d90de4a82a3a9fc510a8a51302f9fadb58e779fcf276967614f1d7b949a"} Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.629431 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-m9bmk"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.632686 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1e3-account-create-update-27rvz" event={"ID":"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66","Type":"ContainerStarted","Data":"00d215ec78bf8c770cacf540ff66f3d4763867f9682f81bcb5a03fb3842969ec"} Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.727827 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-84ff-account-create-update-9xb4v"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.729354 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.737646 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.739265 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-84ff-account-create-update-9xb4v"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.795483 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.795801 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.795885 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjgq\" (UniqueName: \"kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.796002 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqdz\" (UniqueName: 
\"kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.897559 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhjgq\" (UniqueName: \"kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.897724 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqdz\" (UniqueName: \"kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.897819 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.897848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 
crc kubenswrapper[4739]: E0218 14:19:28.898916 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.900138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.900258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.918022 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppqdz\" (UniqueName: \"kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.919540 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhjgq\" (UniqueName: \"kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: 
\"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.078736 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-973a-account-create-update-lsz5w"] Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.094289 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.110009 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.130267 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.156385 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4dc5-account-create-update-shnqq"] Feb 18 14:19:29 crc kubenswrapper[4739]: W0218 14:19:29.172953 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e4c634d_6e65_4f6b_8001_0ac3e35a4801.slice/crio-54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155 WatchSource:0}: Error finding container 54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155: Status 404 returned error can't find the container with id 54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155 Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.194043 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.288480 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x8lmx"] Feb 18 14:19:29 crc kubenswrapper[4739]: W0218 14:19:29.399219 4739 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8c94ce9_7b1b_43bd_9c93_303d0e675809.slice/crio-26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d WatchSource:0}: Error finding container 26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d: Status 404 returned error can't find the container with id 26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.658185 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-m9bmk"] Feb 18 14:19:29 crc kubenswrapper[4739]: W0218 14:19:29.665134 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0275833c_ab0c_4865_9c6e_5c8d54a5e238.slice/crio-c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a WatchSource:0}: Error finding container c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a: Status 404 returned error can't find the container with id c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.666885 4739 generic.go:334] "Generic (PLEG): container finished" podID="075a587a-4bf2-43e9-8c63-1357e9cb05c9" containerID="cbc19c6c86655aa18f2e8592ecad70f9e15a7d8e6a21338195448e4c95da6205" exitCode=0 Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.666987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fwtxs" event={"ID":"075a587a-4bf2-43e9-8c63-1357e9cb05c9","Type":"ContainerDied","Data":"cbc19c6c86655aa18f2e8592ecad70f9e15a7d8e6a21338195448e4c95da6205"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.669888 4739 generic.go:334] "Generic (PLEG): container finished" podID="c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" containerID="0ff92f634c028d5fd31e4fe14bc0e896efd80534f8071fbf418f38d2b982dd3d" exitCode=0 Feb 18 14:19:29 crc 
kubenswrapper[4739]: I0218 14:19:29.669945 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1e3-account-create-update-27rvz" event={"ID":"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66","Type":"ContainerDied","Data":"0ff92f634c028d5fd31e4fe14bc0e896efd80534f8071fbf418f38d2b982dd3d"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.677646 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-973a-account-create-update-lsz5w" event={"ID":"e1637477-36b3-4dea-b260-15b6e2532af8","Type":"ContainerStarted","Data":"b71e725f96b6406936744325d7c950ca7ac36b206c41fc8ca5c6914fe0564b72"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.677702 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-973a-account-create-update-lsz5w" event={"ID":"e1637477-36b3-4dea-b260-15b6e2532af8","Type":"ContainerStarted","Data":"d6646a29cf0de84fa8bed99394a55b7c9c035ddad6cd104b66ee80a2d71f20e1"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.690975 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4dc5-account-create-update-shnqq" event={"ID":"8e4c634d-6e65-4f6b-8001-0ac3e35a4801","Type":"ContainerStarted","Data":"0d27470aa9ffe633d4b6a23a81a92ae2b802439fbedd1d4e1b5cb7aad209d3a5"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.691019 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4dc5-account-create-update-shnqq" event={"ID":"8e4c634d-6e65-4f6b-8001-0ac3e35a4801","Type":"ContainerStarted","Data":"54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.702880 4739 generic.go:334] "Generic (PLEG): container finished" podID="b08bf9ca-ebbc-4d72-b227-20a5c7eed529" containerID="a772895e8b9301fae88d05626c6575b52b2a6a8650d7cff35a137c777919497f" exitCode=0 Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.703003 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-create-nndld" event={"ID":"b08bf9ca-ebbc-4d72-b227-20a5c7eed529","Type":"ContainerDied","Data":"a772895e8b9301fae88d05626c6575b52b2a6a8650d7cff35a137c777919497f"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.718303 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x8lmx" event={"ID":"f8c94ce9-7b1b-43bd-9c93-303d0e675809","Type":"ContainerStarted","Data":"b43639724ef806f70a0570b3c7861b506614a00a4a43b0f7196363d0163afa24"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.718347 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x8lmx" event={"ID":"f8c94ce9-7b1b-43bd-9c93-303d0e675809","Type":"ContainerStarted","Data":"26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.725151 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerStarted","Data":"33e26c074fe392c233d18320191c667cb0f7939b2787e917560ff0fa66b0f407"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.726693 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-973a-account-create-update-lsz5w" podStartSLOduration=2.726675104 podStartE2EDuration="2.726675104s" podCreationTimestamp="2026-02-18 14:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:29.716553917 +0000 UTC m=+1202.212274839" watchObservedRunningTime="2026-02-18 14:19:29.726675104 +0000 UTC m=+1202.222396026" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.754631 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-4dc5-account-create-update-shnqq" podStartSLOduration=2.7546062559999998 podStartE2EDuration="2.754606256s" 
podCreationTimestamp="2026-02-18 14:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:29.728727527 +0000 UTC m=+1202.224448449" watchObservedRunningTime="2026-02-18 14:19:29.754606256 +0000 UTC m=+1202.250327188" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.774383 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-x8lmx" podStartSLOduration=2.774363579 podStartE2EDuration="2.774363579s" podCreationTimestamp="2026-02-18 14:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:29.75831268 +0000 UTC m=+1202.254033602" watchObservedRunningTime="2026-02-18 14:19:29.774363579 +0000 UTC m=+1202.270084501" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.812920 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-84ff-account-create-update-9xb4v"] Feb 18 14:19:30 crc kubenswrapper[4739]: E0218 14:19:30.312079 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod846b1cf2_bffb_4eca_a8f2_f3c0fcc7ac4b.slice/crio-aca2d7cf6c996ecda1b70039221c80c30560394fd55fdc793dfd46773ab29a77.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0275833c_ab0c_4865_9c6e_5c8d54a5e238.slice/crio-conmon-06c6fe02fa56ef5594d8d43926f6b44f805a40324d87581600b0c88cf5d2d444.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.739932 4739 generic.go:334] "Generic (PLEG): container finished" podID="8e4c634d-6e65-4f6b-8001-0ac3e35a4801" containerID="0d27470aa9ffe633d4b6a23a81a92ae2b802439fbedd1d4e1b5cb7aad209d3a5" exitCode=0 Feb 18 
14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.740054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4dc5-account-create-update-shnqq" event={"ID":"8e4c634d-6e65-4f6b-8001-0ac3e35a4801","Type":"ContainerDied","Data":"0d27470aa9ffe633d4b6a23a81a92ae2b802439fbedd1d4e1b5cb7aad209d3a5"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.744868 4739 generic.go:334] "Generic (PLEG): container finished" podID="e1637477-36b3-4dea-b260-15b6e2532af8" containerID="b71e725f96b6406936744325d7c950ca7ac36b206c41fc8ca5c6914fe0564b72" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.744957 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-973a-account-create-update-lsz5w" event={"ID":"e1637477-36b3-4dea-b260-15b6e2532af8","Type":"ContainerDied","Data":"b71e725f96b6406936744325d7c950ca7ac36b206c41fc8ca5c6914fe0564b72"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.747570 4739 generic.go:334] "Generic (PLEG): container finished" podID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerID="aca2d7cf6c996ecda1b70039221c80c30560394fd55fdc793dfd46773ab29a77" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.747664 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerDied","Data":"aca2d7cf6c996ecda1b70039221c80c30560394fd55fdc793dfd46773ab29a77"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.754339 4739 generic.go:334] "Generic (PLEG): container finished" podID="f8c94ce9-7b1b-43bd-9c93-303d0e675809" containerID="b43639724ef806f70a0570b3c7861b506614a00a4a43b0f7196363d0163afa24" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.754469 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x8lmx" 
event={"ID":"f8c94ce9-7b1b-43bd-9c93-303d0e675809","Type":"ContainerDied","Data":"b43639724ef806f70a0570b3c7861b506614a00a4a43b0f7196363d0163afa24"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.759181 4739 generic.go:334] "Generic (PLEG): container finished" podID="c50e4a24-ad83-4694-be4d-6b0811726c3d" containerID="a765ba1e358815d14c909f560cbad1d380538cd7c1dacb154a2b8d05f4b98d09" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.759322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" event={"ID":"c50e4a24-ad83-4694-be4d-6b0811726c3d","Type":"ContainerDied","Data":"a765ba1e358815d14c909f560cbad1d380538cd7c1dacb154a2b8d05f4b98d09"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.759378 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" event={"ID":"c50e4a24-ad83-4694-be4d-6b0811726c3d","Type":"ContainerStarted","Data":"8d50214c2ea47b4d718d57a39461e35c1ec6d3d03c076b4695023973166e29bf"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.764746 4739 generic.go:334] "Generic (PLEG): container finished" podID="0275833c-ab0c-4865-9c6e-5c8d54a5e238" containerID="06c6fe02fa56ef5594d8d43926f6b44f805a40324d87581600b0c88cf5d2d444" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.764858 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" event={"ID":"0275833c-ab0c-4865-9c6e-5c8d54a5e238","Type":"ContainerDied","Data":"06c6fe02fa56ef5594d8d43926f6b44f805a40324d87581600b0c88cf5d2d444"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.764888 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" event={"ID":"0275833c-ab0c-4865-9c6e-5c8d54a5e238","Type":"ContainerStarted","Data":"c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a"} Feb 18 14:19:30 
crc kubenswrapper[4739]: I0218 14:19:30.767175 4739 generic.go:334] "Generic (PLEG): container finished" podID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerID="a716eae534567c7eacf310c551635181608ae4e159e2fd3e991903215040cab2" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.767321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerDied","Data":"a716eae534567c7eacf310c551635181608ae4e159e2fd3e991903215040cab2"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.429430 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zz64p" podUID="7289493d-f197-436b-bc45-84721d12c034" containerName="ovn-controller" probeResult="failure" output=< Feb 18 14:19:31 crc kubenswrapper[4739]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 14:19:31 crc kubenswrapper[4739]: > Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.445952 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nndld" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.485424 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.488019 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.570945 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts\") pod \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.571049 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lpld\" (UniqueName: \"kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld\") pod \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.573657 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b08bf9ca-ebbc-4d72-b227-20a5c7eed529" (UID: "b08bf9ca-ebbc-4d72-b227-20a5c7eed529"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.577870 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.587768 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld" (OuterVolumeSpecName: "kube-api-access-9lpld") pod "b08bf9ca-ebbc-4d72-b227-20a5c7eed529" (UID: "b08bf9ca-ebbc-4d72-b227-20a5c7eed529"). InnerVolumeSpecName "kube-api-access-9lpld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.681047 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lpld\" (UniqueName: \"kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.713591 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.721678 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.766130 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zz64p-config-rjp7j"] Feb 18 14:19:31 crc kubenswrapper[4739]: E0218 14:19:31.766645 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" containerName="mariadb-account-create-update" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.766668 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" containerName="mariadb-account-create-update" Feb 18 14:19:31 crc kubenswrapper[4739]: E0218 14:19:31.766695 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075a587a-4bf2-43e9-8c63-1357e9cb05c9" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.766704 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="075a587a-4bf2-43e9-8c63-1357e9cb05c9" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: E0218 14:19:31.766722 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08bf9ca-ebbc-4d72-b227-20a5c7eed529" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.766729 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08bf9ca-ebbc-4d72-b227-20a5c7eed529" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.766988 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" containerName="mariadb-account-create-update" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.767018 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08bf9ca-ebbc-4d72-b227-20a5c7eed529" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.767028 4739 
memory_manager.go:354] "RemoveStaleState removing state" podUID="075a587a-4bf2-43e9-8c63-1357e9cb05c9" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.768829 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.772898 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.784186 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdjf\" (UniqueName: \"kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf\") pod \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.784270 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts\") pod \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.784416 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nplp\" (UniqueName: \"kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp\") pod \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.784639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts\") pod \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " Feb 18 14:19:31 crc kubenswrapper[4739]: 
I0218 14:19:31.785904 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "075a587a-4bf2-43e9-8c63-1357e9cb05c9" (UID: "075a587a-4bf2-43e9-8c63-1357e9cb05c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.786779 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" (UID: "c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.793615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp" (OuterVolumeSpecName: "kube-api-access-7nplp") pod "075a587a-4bf2-43e9-8c63-1357e9cb05c9" (UID: "075a587a-4bf2-43e9-8c63-1357e9cb05c9"). InnerVolumeSpecName "kube-api-access-7nplp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.804176 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerStarted","Data":"420239777de013111b55f9705b339d83a1c93dfa9079f1331da42bfce805ea29"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.807585 4739 generic.go:334] "Generic (PLEG): container finished" podID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerID="a1e18a076520af601e6507f431aa025a06385212521ec627530586a088f11655" exitCode=0 Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.807673 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerDied","Data":"a1e18a076520af601e6507f431aa025a06385212521ec627530586a088f11655"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.809048 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf" (OuterVolumeSpecName: "kube-api-access-htdjf") pod "c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" (UID: "c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66"). InnerVolumeSpecName "kube-api-access-htdjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.812985 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerStarted","Data":"3228467af95ce70d1ea7ebd3cd207c3fd6c54c75409aecf8eea728d75488502d"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.818330 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p-config-rjp7j"] Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.821870 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.830568 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.830596 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fwtxs" event={"ID":"075a587a-4bf2-43e9-8c63-1357e9cb05c9","Type":"ContainerDied","Data":"9f0626a8e486de18d204ce8ce30bfe092ee4b300499982be629e59e5f5aca34d"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.830620 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f0626a8e486de18d204ce8ce30bfe092ee4b300499982be629e59e5f5aca34d" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.836737 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nndld" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.835969 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nndld" event={"ID":"b08bf9ca-ebbc-4d72-b227-20a5c7eed529","Type":"ContainerDied","Data":"613a7d90de4a82a3a9fc510a8a51302f9fadb58e779fcf276967614f1d7b949a"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.851011 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="613a7d90de4a82a3a9fc510a8a51302f9fadb58e779fcf276967614f1d7b949a" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.859214 4739 generic.go:334] "Generic (PLEG): container finished" podID="70500a97-2717-4761-884a-25cf8ab89380" containerID="50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886" exitCode=0 Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.859327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerDied","Data":"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.867420 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=27.22561994 podStartE2EDuration="1m13.8673985s" podCreationTimestamp="2026-02-18 14:18:18 +0000 UTC" firstStartedPulling="2026-02-18 14:18:44.095924675 +0000 UTC m=+1156.591645597" lastFinishedPulling="2026-02-18 14:19:30.737703235 +0000 UTC m=+1203.233424157" observedRunningTime="2026-02-18 14:19:31.846547259 +0000 UTC m=+1204.342268191" watchObservedRunningTime="2026-02-18 14:19:31.8673985 +0000 UTC m=+1204.363119442" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.887954 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.888047 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.888153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " 
pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.888231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5vrf\" (UniqueName: \"kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.888347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.888372 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.890059 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htdjf\" (UniqueName: \"kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.890096 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.890112 4739 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-7nplp\" (UniqueName: \"kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.890123 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.891860 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1e3-account-create-update-27rvz" event={"ID":"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66","Type":"ContainerDied","Data":"00d215ec78bf8c770cacf540ff66f3d4763867f9682f81bcb5a03fb3842969ec"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.891904 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d215ec78bf8c770cacf540ff66f3d4763867f9682f81bcb5a03fb3842969ec" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.891975 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.903541 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerStarted","Data":"1196a1e6460811c94c46f39dbe0fd6c6f691e4c8c02027977bcbe32e7ab65403"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.904592 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.942898 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.936845504 podStartE2EDuration="1m20.942871062s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.492961962 +0000 UTC m=+1128.988682894" lastFinishedPulling="2026-02-18 14:18:55.49898753 +0000 UTC m=+1167.994708452" observedRunningTime="2026-02-18 14:19:31.938662115 +0000 UTC m=+1204.434383047" watchObservedRunningTime="2026-02-18 14:19:31.942871062 +0000 UTC m=+1204.438592004" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.993365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5vrf\" (UniqueName: \"kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.994321 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc 
kubenswrapper[4739]: I0218 14:19:31.999355 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.001413 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.001687 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.002032 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.002410 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 
14:19:32.002480 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.003320 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.003356 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.003761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.033725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5vrf\" (UniqueName: \"kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.085390 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371955.769403 podStartE2EDuration="1m21.085372532s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.538033798 +0000 UTC m=+1129.033754710" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:32.081928984 +0000 UTC m=+1204.577649916" watchObservedRunningTime="2026-02-18 14:19:32.085372532 +0000 UTC m=+1204.581093454" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.105628 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.587614 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.737023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts\") pod \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.737279 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsqzg\" (UniqueName: \"kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg\") pod \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.738269 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e4c634d-6e65-4f6b-8001-0ac3e35a4801" (UID: "8e4c634d-6e65-4f6b-8001-0ac3e35a4801"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.738968 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.746852 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg" (OuterVolumeSpecName: "kube-api-access-tsqzg") pod "8e4c634d-6e65-4f6b-8001-0ac3e35a4801" (UID: "8e4c634d-6e65-4f6b-8001-0ac3e35a4801"). InnerVolumeSpecName "kube-api-access-tsqzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.844225 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsqzg\" (UniqueName: \"kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.903238 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.949113 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerStarted","Data":"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef"} Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.952655 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.958520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerStarted","Data":"86dcf3153be4cedc4f3f4f557f9adbf8d2dc9ddb02d52663f80236312bb555f6"} Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.958841 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.962606 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j927w"] Feb 18 14:19:32 crc kubenswrapper[4739]: E0218 14:19:32.963034 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0275833c-ab0c-4865-9c6e-5c8d54a5e238" containerName="mariadb-database-create" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.963054 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0275833c-ab0c-4865-9c6e-5c8d54a5e238" containerName="mariadb-database-create" Feb 18 14:19:32 crc kubenswrapper[4739]: E0218 14:19:32.963091 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4c634d-6e65-4f6b-8001-0ac3e35a4801" containerName="mariadb-account-create-update" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.963098 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4c634d-6e65-4f6b-8001-0ac3e35a4801" 
containerName="mariadb-account-create-update" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.963304 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0275833c-ab0c-4865-9c6e-5c8d54a5e238" containerName="mariadb-database-create" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.963328 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4c634d-6e65-4f6b-8001-0ac3e35a4801" containerName="mariadb-account-create-update" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.967295 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j927w" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.970971 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.991153 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.991291 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" event={"ID":"0275833c-ab0c-4865-9c6e-5c8d54a5e238","Type":"ContainerDied","Data":"c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a"} Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.991344 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:32.995561 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:32.997050 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4dc5-account-create-update-shnqq" event={"ID":"8e4c634d-6e65-4f6b-8001-0ac3e35a4801","Type":"ContainerDied","Data":"54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155"} Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:32.997089 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:32.997129 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.021317 4739 generic.go:334] "Generic (PLEG): container finished" podID="ab89b7a2-642d-4a99-9eb4-f01b2990e75d" containerID="74f496583eea24c7aa24787e4734e6c62cca95951d885c0cd6942e3b4f8ff69f" exitCode=0 Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.022230 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cfjpx" event={"ID":"ab89b7a2-642d-4a99-9eb4-f01b2990e75d","Type":"ContainerDied","Data":"74f496583eea24c7aa24787e4734e6c62cca95951d885c0cd6942e3b4f8ff69f"} Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.025272 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.139993 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhjgq\" (UniqueName: \"kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq\") pod \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.140079 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts\") pod \"e1637477-36b3-4dea-b260-15b6e2532af8\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.140215 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts\") pod \"c50e4a24-ad83-4694-be4d-6b0811726c3d\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.140321 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9mcc\" (UniqueName: \"kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc\") pod \"e1637477-36b3-4dea-b260-15b6e2532af8\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.140346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts\") pod \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.140391 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ppqdz\" (UniqueName: \"kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz\") pod \"c50e4a24-ad83-4694-be4d-6b0811726c3d\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.147357 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1637477-36b3-4dea-b260-15b6e2532af8" (UID: "e1637477-36b3-4dea-b260-15b6e2532af8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.147921 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0275833c-ab0c-4865-9c6e-5c8d54a5e238" (UID: "0275833c-ab0c-4865-9c6e-5c8d54a5e238"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.148389 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c50e4a24-ad83-4694-be4d-6b0811726c3d" (UID: "c50e4a24-ad83-4694-be4d-6b0811726c3d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.150555 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc" (OuterVolumeSpecName: "kube-api-access-m9mcc") pod "e1637477-36b3-4dea-b260-15b6e2532af8" (UID: "e1637477-36b3-4dea-b260-15b6e2532af8"). 
InnerVolumeSpecName "kube-api-access-m9mcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.161387 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq" (OuterVolumeSpecName: "kube-api-access-bhjgq") pod "0275833c-ab0c-4865-9c6e-5c8d54a5e238" (UID: "0275833c-ab0c-4865-9c6e-5c8d54a5e238"). InnerVolumeSpecName "kube-api-access-bhjgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.162749 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz" (OuterVolumeSpecName: "kube-api-access-ppqdz") pod "c50e4a24-ad83-4694-be4d-6b0811726c3d" (UID: "c50e4a24-ad83-4694-be4d-6b0811726c3d"). InnerVolumeSpecName "kube-api-access-ppqdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.196043 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j927w"] Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.214020 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.629427603 podStartE2EDuration="1m22.213991949s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.559907305 +0000 UTC m=+1129.055628227" lastFinishedPulling="2026-02-18 14:18:56.144471651 +0000 UTC m=+1168.640192573" observedRunningTime="2026-02-18 14:19:32.993217065 +0000 UTC m=+1205.488937997" watchObservedRunningTime="2026-02-18 14:19:33.213991949 +0000 UTC m=+1205.709712871" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.227605 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" 
podStartSLOduration=42.692008945 podStartE2EDuration="1m22.227583095s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.605654428 +0000 UTC m=+1129.101375350" lastFinishedPulling="2026-02-18 14:18:56.141228578 +0000 UTC m=+1168.636949500" observedRunningTime="2026-02-18 14:19:33.040847429 +0000 UTC m=+1205.536568351" watchObservedRunningTime="2026-02-18 14:19:33.227583095 +0000 UTC m=+1205.723304037" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.243894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.243999 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szljh\" (UniqueName: \"kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244277 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9mcc\" (UniqueName: \"kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244298 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244307 4739 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-ppqdz\" (UniqueName: \"kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244317 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhjgq\" (UniqueName: \"kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244327 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244335 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.285513 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.345416 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts\") pod \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.345583 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdpkq\" (UniqueName: \"kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq\") pod \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.345908 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szljh\" (UniqueName: \"kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.346169 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.346648 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8c94ce9-7b1b-43bd-9c93-303d0e675809" (UID: "f8c94ce9-7b1b-43bd-9c93-303d0e675809"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.347038 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.363846 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq" (OuterVolumeSpecName: "kube-api-access-qdpkq") pod "f8c94ce9-7b1b-43bd-9c93-303d0e675809" (UID: "f8c94ce9-7b1b-43bd-9c93-303d0e675809"). InnerVolumeSpecName "kube-api-access-qdpkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.375374 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szljh\" (UniqueName: \"kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.428261 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p-config-rjp7j"] Feb 18 14:19:33 crc kubenswrapper[4739]: W0218 14:19:33.439272 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42c00b9a_453b_4ec4_b98c_60547e6987ac.slice/crio-394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f WatchSource:0}: Error finding container 394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f: Status 404 returned error can't find the container with id 
394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.454746 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdpkq\" (UniqueName: \"kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.454793 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.586065 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j927w" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.035441 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.035476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" event={"ID":"c50e4a24-ad83-4694-be4d-6b0811726c3d","Type":"ContainerDied","Data":"8d50214c2ea47b4d718d57a39461e35c1ec6d3d03c076b4695023973166e29bf"} Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.036040 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d50214c2ea47b4d718d57a39461e35c1ec6d3d03c076b4695023973166e29bf" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.038966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-rjp7j" event={"ID":"42c00b9a-453b-4ec4-b98c-60547e6987ac","Type":"ContainerStarted","Data":"405502ac3609c5b3fd9875f3041040fcb2500cda1197ef6aa5109c839a432fea"} Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.039042 4739 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-controller-zz64p-config-rjp7j" event={"ID":"42c00b9a-453b-4ec4-b98c-60547e6987ac","Type":"ContainerStarted","Data":"394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f"} Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.040995 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-973a-account-create-update-lsz5w" event={"ID":"e1637477-36b3-4dea-b260-15b6e2532af8","Type":"ContainerDied","Data":"d6646a29cf0de84fa8bed99394a55b7c9c035ddad6cd104b66ee80a2d71f20e1"} Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.041039 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6646a29cf0de84fa8bed99394a55b7c9c035ddad6cd104b66ee80a2d71f20e1" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.041104 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.047856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x8lmx" event={"ID":"f8c94ce9-7b1b-43bd-9c93-303d0e675809","Type":"ContainerDied","Data":"26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d"} Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.047917 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.047937 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.065016 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zz64p-config-rjp7j" podStartSLOduration=3.064995204 podStartE2EDuration="3.064995204s" podCreationTimestamp="2026-02-18 14:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:34.060197571 +0000 UTC m=+1206.555918493" watchObservedRunningTime="2026-02-18 14:19:34.064995204 +0000 UTC m=+1206.560716126" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.255696 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j927w"] Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.689861 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790086 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790153 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790176 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: 
\"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790239 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790333 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twx6f\" (UniqueName: \"kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790363 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790383 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.791148 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.791431 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.798254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f" (OuterVolumeSpecName: "kube-api-access-twx6f") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "kube-api-access-twx6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.799687 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.818789 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts" (OuterVolumeSpecName: "scripts") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.838615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.838925 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893155 4739 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893197 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twx6f\" (UniqueName: \"kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893216 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893227 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893239 4739 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893249 4739 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893258 4739 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.058668 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j927w" event={"ID":"009b4d4e-6b53-4e8d-a03e-79c96c50425b","Type":"ContainerStarted","Data":"4041330ab9876dd3ccc3269fd63191d50dd8718454d5e9168b48f08746b23647"} Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.058724 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j927w" event={"ID":"009b4d4e-6b53-4e8d-a03e-79c96c50425b","Type":"ContainerStarted","Data":"8ce1f00e0dd0b9ea8a548b02136bb281984b25347c6ff94b43935c636e20b23c"} Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.061675 4739 generic.go:334] "Generic (PLEG): container finished" podID="42c00b9a-453b-4ec4-b98c-60547e6987ac" containerID="405502ac3609c5b3fd9875f3041040fcb2500cda1197ef6aa5109c839a432fea" exitCode=0 Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.061836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-rjp7j" 
event={"ID":"42c00b9a-453b-4ec4-b98c-60547e6987ac","Type":"ContainerDied","Data":"405502ac3609c5b3fd9875f3041040fcb2500cda1197ef6aa5109c839a432fea"} Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.063527 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cfjpx" event={"ID":"ab89b7a2-642d-4a99-9eb4-f01b2990e75d","Type":"ContainerDied","Data":"542842abdf2ee0753ae804a9cea526e4b6d5b0555fbd53a632bf6c534bb3371f"} Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.063571 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="542842abdf2ee0753ae804a9cea526e4b6d5b0555fbd53a632bf6c534bb3371f" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.063586 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.082146 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-j927w" podStartSLOduration=3.082123111 podStartE2EDuration="3.082123111s" podCreationTimestamp="2026-02-18 14:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:35.073324017 +0000 UTC m=+1207.569044949" watchObservedRunningTime="2026-02-18 14:19:35.082123111 +0000 UTC m=+1207.577844023" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.640761 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.640908 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.646427 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:36 crc 
kubenswrapper[4739]: I0218 14:19:36.089602 4739 generic.go:334] "Generic (PLEG): container finished" podID="009b4d4e-6b53-4e8d-a03e-79c96c50425b" containerID="4041330ab9876dd3ccc3269fd63191d50dd8718454d5e9168b48f08746b23647" exitCode=0 Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.092184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j927w" event={"ID":"009b4d4e-6b53-4e8d-a03e-79c96c50425b","Type":"ContainerDied","Data":"4041330ab9876dd3ccc3269fd63191d50dd8718454d5e9168b48f08746b23647"} Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.093397 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.391972 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zz64p" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.587824 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5vrf\" (UniqueName: \"kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635300 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635288 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635369 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635418 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635458 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635478 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635674 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run" (OuterVolumeSpecName: "var-run") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.636066 4739 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.636083 4739 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.636092 4739 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.636290 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.636602 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts" (OuterVolumeSpecName: "scripts") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.641432 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf" (OuterVolumeSpecName: "kube-api-access-q5vrf") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "kube-api-access-q5vrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.738121 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.738424 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5vrf\" (UniqueName: \"kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.738538 4739 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.153797 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3be45be-9ee4-4114-b2e5-78d9b0341129","Type":"ContainerStarted","Data":"24aebdd733cf86d50f4d81a80351f3ecdfb5d71c209f40b4f4767559533e0933"} Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.154571 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.160396 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-rjp7j" 
event={"ID":"42c00b9a-453b-4ec4-b98c-60547e6987ac","Type":"ContainerDied","Data":"394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f"} Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.160460 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.160560 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.206598 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.632131139 podStartE2EDuration="35.206574933s" podCreationTimestamp="2026-02-18 14:19:02 +0000 UTC" firstStartedPulling="2026-02-18 14:19:04.275681719 +0000 UTC m=+1176.771402641" lastFinishedPulling="2026-02-18 14:19:35.850125513 +0000 UTC m=+1208.345846435" observedRunningTime="2026-02-18 14:19:37.183245279 +0000 UTC m=+1209.678966221" watchObservedRunningTime="2026-02-18 14:19:37.206574933 +0000 UTC m=+1209.702295875" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237021 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gnm8m"] Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.237556 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50e4a24-ad83-4694-be4d-6b0811726c3d" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237578 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50e4a24-ad83-4694-be4d-6b0811726c3d" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.237604 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab89b7a2-642d-4a99-9eb4-f01b2990e75d" containerName="swift-ring-rebalance" Feb 18 14:19:37 crc 
kubenswrapper[4739]: I0218 14:19:37.237614 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab89b7a2-642d-4a99-9eb4-f01b2990e75d" containerName="swift-ring-rebalance" Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.237630 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1637477-36b3-4dea-b260-15b6e2532af8" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237638 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1637477-36b3-4dea-b260-15b6e2532af8" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.237652 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c94ce9-7b1b-43bd-9c93-303d0e675809" containerName="mariadb-database-create" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237659 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c94ce9-7b1b-43bd-9c93-303d0e675809" containerName="mariadb-database-create" Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.237675 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c00b9a-453b-4ec4-b98c-60547e6987ac" containerName="ovn-config" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237701 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c00b9a-453b-4ec4-b98c-60547e6987ac" containerName="ovn-config" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237937 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab89b7a2-642d-4a99-9eb4-f01b2990e75d" containerName="swift-ring-rebalance" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237975 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c00b9a-453b-4ec4-b98c-60547e6987ac" containerName="ovn-config" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237993 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50e4a24-ad83-4694-be4d-6b0811726c3d" 
containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.238015 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c94ce9-7b1b-43bd-9c93-303d0e675809" containerName="mariadb-database-create" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.238037 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1637477-36b3-4dea-b260-15b6e2532af8" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.238940 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.243993 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.244199 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gvb8h" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.252560 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gnm8m"] Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.363690 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.363788 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 
14:19:37.363888 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.363950 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzphm\" (UniqueName: \"kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.467133 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.467273 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzphm\" (UniqueName: \"kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.467352 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.467467 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.477020 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.480155 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.491635 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.496621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzphm\" (UniqueName: \"kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.579488 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.714502 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j927w" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.777275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szljh\" (UniqueName: \"kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh\") pod \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.777910 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts\") pod \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.779224 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "009b4d4e-6b53-4e8d-a03e-79c96c50425b" (UID: "009b4d4e-6b53-4e8d-a03e-79c96c50425b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.790394 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh" (OuterVolumeSpecName: "kube-api-access-szljh") pod "009b4d4e-6b53-4e8d-a03e-79c96c50425b" (UID: "009b4d4e-6b53-4e8d-a03e-79c96c50425b"). InnerVolumeSpecName "kube-api-access-szljh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.790554 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zz64p-config-rjp7j"] Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.817704 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zz64p-config-rjp7j"] Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.881013 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.881054 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szljh\" (UniqueName: \"kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.883998 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zz64p-config-zqwr9"] Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.884537 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009b4d4e-6b53-4e8d-a03e-79c96c50425b" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.884557 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="009b4d4e-6b53-4e8d-a03e-79c96c50425b" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.884837 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="009b4d4e-6b53-4e8d-a03e-79c96c50425b" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.885773 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.892936 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.903225 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p-config-zqwr9"] Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983063 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtd87\" (UniqueName: \"kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983129 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: 
\"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983470 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085176 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtd87\" (UniqueName: \"kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87\") pod 
\"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085199 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085260 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085291 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085596 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.087325 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: 
\"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.087373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.088109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.088612 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.108866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtd87\" (UniqueName: \"kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.179154 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j927w" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.179548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j927w" event={"ID":"009b4d4e-6b53-4e8d-a03e-79c96c50425b","Type":"ContainerDied","Data":"8ce1f00e0dd0b9ea8a548b02136bb281984b25347c6ff94b43935c636e20b23c"} Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.179597 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce1f00e0dd0b9ea8a548b02136bb281984b25347c6ff94b43935c636e20b23c" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.225411 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.324377 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gnm8m"] Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.456762 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c00b9a-453b-4ec4-b98c-60547e6987ac" path="/var/lib/kubelet/pods/42c00b9a-453b-4ec4-b98c-60547e6987ac/volumes" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.831433 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p-config-zqwr9"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.048377 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.049742 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.064771 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.113774 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.116575 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xwmf\" (UniqueName: \"kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.205670 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnm8m" event={"ID":"edf3454e-4ac2-42a7-98b1-0f43065764c2","Type":"ContainerStarted","Data":"2b55e9103d7f00a94e8592c5a8d14e8e0f69cd459f1c5013831102a48b6f0d28"} Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.207284 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-zqwr9" event={"ID":"c9b1f63c-45e3-41c6-b25a-7136017ef699","Type":"ContainerStarted","Data":"8a12b70fe38cbeeedf9e3138a2a60817e675e6940e1ede9c344cc11b3e9be763"} Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.220861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xwmf\" (UniqueName: 
\"kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.221201 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.230621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.261074 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xwmf\" (UniqueName: \"kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.307511 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-d06e-account-create-update-nwqxj"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.309184 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.315747 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d06e-account-create-update-nwqxj"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.329803 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.375919 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.381747 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.382158 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="config-reloader" containerID="cri-o://20e4696ddb81097644db58c7ff47cdd8db35bca8af8eb47dfd10333be0e9ab30" gracePeriod=600 Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.382524 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="prometheus" containerID="cri-o://420239777de013111b55f9705b339d83a1c93dfa9079f1331da42bfce805ea29" gracePeriod=600 Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.382646 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="thanos-sidecar" containerID="cri-o://33e26c074fe392c233d18320191c667cb0f7939b2787e917560ff0fa66b0f407" gracePeriod=600 Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.425051 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69l57\" (UniqueName: \"kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.425178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.462925 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j927w"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.479745 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j927w"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.527093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69l57\" (UniqueName: \"kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.528265 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " 
pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.529007 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.558612 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69l57\" (UniqueName: \"kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.643470 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.005592 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"] Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223711 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerID="420239777de013111b55f9705b339d83a1c93dfa9079f1331da42bfce805ea29" exitCode=0 Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223751 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerID="33e26c074fe392c233d18320191c667cb0f7939b2787e917560ff0fa66b0f407" exitCode=0 Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223765 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerID="20e4696ddb81097644db58c7ff47cdd8db35bca8af8eb47dfd10333be0e9ab30" exitCode=0 Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223788 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerDied","Data":"420239777de013111b55f9705b339d83a1c93dfa9079f1331da42bfce805ea29"} Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223854 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerDied","Data":"33e26c074fe392c233d18320191c667cb0f7939b2787e917560ff0fa66b0f407"} Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerDied","Data":"20e4696ddb81097644db58c7ff47cdd8db35bca8af8eb47dfd10333be0e9ab30"} Feb 18 14:19:40 crc 
kubenswrapper[4739]: I0218 14:19:40.226480 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-zqwr9" event={"ID":"c9b1f63c-45e3-41c6-b25a-7136017ef699","Type":"ContainerStarted","Data":"7c4bb8b1c5394b1feff00226f10597657ca326d8c75003b9dcfbb17edea1d2b3"} Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.230219 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" event={"ID":"4689ea28-dac4-434f-af87-18d6fc903330","Type":"ContainerStarted","Data":"aeecd2b89a671dca1be3ef9e35a978d5b8bb96c2f8a21345f57c0954a3cd475b"} Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.253469 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zz64p-config-zqwr9" podStartSLOduration=3.253416808 podStartE2EDuration="3.253416808s" podCreationTimestamp="2026-02-18 14:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:40.248979825 +0000 UTC m=+1212.744700757" watchObservedRunningTime="2026-02-18 14:19:40.253416808 +0000 UTC m=+1212.749137730" Feb 18 14:19:40 crc kubenswrapper[4739]: W0218 14:19:40.267647 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0b9a6cb_633e_4390_b1f9_048bc4a7a6ff.slice/crio-33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9 WatchSource:0}: Error finding container 33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9: Status 404 returned error can't find the container with id 33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9 Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.274819 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d06e-account-create-update-nwqxj"] Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.429984 
4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009b4d4e-6b53-4e8d-a03e-79c96c50425b" path="/var/lib/kubelet/pods/009b4d4e-6b53-4e8d-a03e-79c96c50425b/volumes" Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.641722 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": dial tcp 10.217.0.139:9090: connect: connection refused" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.252917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" event={"ID":"4689ea28-dac4-434f-af87-18d6fc903330","Type":"ContainerStarted","Data":"03bcbac09256150553750b2ceb7fcb6d133193457a99a73d75f4293c1b1edcb5"} Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.255618 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" event={"ID":"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff","Type":"ContainerStarted","Data":"040eeb174e895a0add4ac74007d11ab4b4e0bb01f7764fd5d6eff38c7db3910b"} Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.255660 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" event={"ID":"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff","Type":"ContainerStarted","Data":"33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9"} Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.257516 4739 generic.go:334] "Generic (PLEG): container finished" podID="c9b1f63c-45e3-41c6-b25a-7136017ef699" containerID="7c4bb8b1c5394b1feff00226f10597657ca326d8c75003b9dcfbb17edea1d2b3" exitCode=0 Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.257542 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-zqwr9" 
event={"ID":"c9b1f63c-45e3-41c6-b25a-7136017ef699","Type":"ContainerDied","Data":"7c4bb8b1c5394b1feff00226f10597657ca326d8c75003b9dcfbb17edea1d2b3"} Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.280568 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" podStartSLOduration=2.280549609 podStartE2EDuration="2.280549609s" podCreationTimestamp="2026-02-18 14:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:41.277271745 +0000 UTC m=+1213.772992667" watchObservedRunningTime="2026-02-18 14:19:41.280549609 +0000 UTC m=+1213.776270531" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.331922 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" podStartSLOduration=2.331901707 podStartE2EDuration="2.331901707s" podCreationTimestamp="2026-02-18 14:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:41.328275005 +0000 UTC m=+1213.823995927" watchObservedRunningTime="2026-02-18 14:19:41.331901707 +0000 UTC m=+1213.827622629" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.520813 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596738 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596788 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596832 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596851 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vnhmt\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596880 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596898 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.597134 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.597200 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.599948 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.600299 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.603883 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.604023 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.605574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config" (OuterVolumeSpecName: "config") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.608007 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt" (OuterVolumeSpecName: "kube-api-access-vnhmt") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "kube-api-access-vnhmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.610786 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.612574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out" (OuterVolumeSpecName: "config-out") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.638687 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config" (OuterVolumeSpecName: "web-config") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.671455 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "pvc-065eb27a-babd-4c1e-9733-7075a750b869". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699699 4739 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699729 4739 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699739 4739 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699748 4739 reconciler_common.go:293] "Volume detached for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699758 4739 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699766 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnhmt\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699775 4739 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699784 4739 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699809 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") on node \"crc\" " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699820 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.744209 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.744381 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-065eb27a-babd-4c1e-9733-7075a750b869" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869") on node "crc" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.801756 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.107493 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.118093 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.222741 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.299783 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.302386 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerDied","Data":"f97314f9f73b65ab6d585d1190d55be82b1924ce7010a229a6c53d15da07f316"} Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.302473 4739 scope.go:117] "RemoveContainer" containerID="420239777de013111b55f9705b339d83a1c93dfa9079f1331da42bfce805ea29" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.401918 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.407056 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.439407 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" path="/var/lib/kubelet/pods/fdf07d43-6839-4ae1-9efd-bd21557e31f0/volumes" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.449594 4739 scope.go:117] "RemoveContainer" containerID="33e26c074fe392c233d18320191c667cb0f7939b2787e917560ff0fa66b0f407" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.456736 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:42 crc kubenswrapper[4739]: E0218 14:19:42.457882 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="init-config-reloader" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.457902 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="init-config-reloader" Feb 18 14:19:42 crc kubenswrapper[4739]: E0218 14:19:42.457937 4739 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="thanos-sidecar" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.457945 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="thanos-sidecar" Feb 18 14:19:42 crc kubenswrapper[4739]: E0218 14:19:42.457956 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="config-reloader" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.457965 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="config-reloader" Feb 18 14:19:42 crc kubenswrapper[4739]: E0218 14:19:42.457988 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="prometheus" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.457996 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="prometheus" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.458249 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="thanos-sidecar" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.458266 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="config-reloader" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.458281 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="prometheus" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.461276 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.466006 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.466312 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.466501 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.466662 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.466829 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.467137 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nz745" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.467178 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.476476 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.481656 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.487721 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.529887 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.529935 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.529986 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530012 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06c16940-f153-4d15-891d-b0b91e9bce5a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530053 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc 
kubenswrapper[4739]: I0218 14:19:42.530116 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530182 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhb24\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-kube-api-access-qhb24\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530215 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530230 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530340 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530369 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530409 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.562149 4739 scope.go:117] "RemoveContainer" containerID="20e4696ddb81097644db58c7ff47cdd8db35bca8af8eb47dfd10333be0e9ab30" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.588035 4739 scope.go:117] "RemoveContainer" containerID="d130ba5106c46e0eaf379f38920ded0167eab599120dd5d9ffdf9b8b0e9aa0ac" Feb 18 
14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.636512 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.636755 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.637426 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.640061 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.640218 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.640253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.640334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642080 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642655 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06c16940-f153-4d15-891d-b0b91e9bce5a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642746 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-secret-combined-ca-bundle\") pod 
\"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642849 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhb24\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-kube-api-access-qhb24\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642981 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.643012 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.643045 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.644364 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.646990 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.647034 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.649309 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06c16940-f153-4d15-891d-b0b91e9bce5a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.649866 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.652898 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.654020 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.654104 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/01cfb519e92c9e23501f00a5b6c703ca97cb1b944d5fe5c6aa349ce505ad2fe2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.663951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 
14:19:42.676073 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.677693 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhb24\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-kube-api-access-qhb24\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.679115 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.724623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.809802 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.938118 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.965292 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 14:19:42 crc kubenswrapper[4739]: W0218 14:19:42.991185 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4da69d20_d4af_4d8d_b1e1_5026676d2078.slice/crio-351e3fc279650f48ff5eac5dd9d1fabb9e666894ad4ff17a14301184bfcb26e4 WatchSource:0}: Error finding container 351e3fc279650f48ff5eac5dd9d1fabb9e666894ad4ff17a14301184bfcb26e4: Status 404 returned error can't find the container with id 351e3fc279650f48ff5eac5dd9d1fabb9e666894ad4ff17a14301184bfcb26e4 Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.049550 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.049649 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.049697 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.049819 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.049890 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtd87\" (UniqueName: \"kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050004 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050035 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050103 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run" (OuterVolumeSpecName: "var-run") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050466 4739 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050482 4739 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050788 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.051109 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts" (OuterVolumeSpecName: "scripts") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.051690 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.056993 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87" (OuterVolumeSpecName: "kube-api-access-mtd87") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "kube-api-access-mtd87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.097164 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.111082 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-x4jss"] Feb 18 14:19:43 crc kubenswrapper[4739]: E0218 14:19:43.111582 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b1f63c-45e3-41c6-b25a-7136017ef699" containerName="ovn-config" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.111604 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b1f63c-45e3-41c6-b25a-7136017ef699" containerName="ovn-config" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.111821 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b1f63c-45e3-41c6-b25a-7136017ef699" containerName="ovn-config" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.112542 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.116001 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.116363 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.123437 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x4jss"] Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.152317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.152587 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbhvx\" (UniqueName: \"kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.153044 4739 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.153074 4739 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.153086 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtd87\" (UniqueName: \"kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.153099 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.217829 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.255189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.255324 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbhvx\" (UniqueName: \"kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.256054 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.275132 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbhvx\" (UniqueName: \"kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.287782 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.323681 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-zqwr9" event={"ID":"c9b1f63c-45e3-41c6-b25a-7136017ef699","Type":"ContainerDied","Data":"8a12b70fe38cbeeedf9e3138a2a60817e675e6940e1ede9c344cc11b3e9be763"} Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.323707 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p-config-zqwr9"
Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.323725 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a12b70fe38cbeeedf9e3138a2a60817e675e6940e1ede9c344cc11b3e9be763"
Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.326008 4739 generic.go:334] "Generic (PLEG): container finished" podID="4689ea28-dac4-434f-af87-18d6fc903330" containerID="03bcbac09256150553750b2ceb7fcb6d133193457a99a73d75f4293c1b1edcb5" exitCode=0
Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.326054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" event={"ID":"4689ea28-dac4-434f-af87-18d6fc903330","Type":"ContainerDied","Data":"03bcbac09256150553750b2ceb7fcb6d133193457a99a73d75f4293c1b1edcb5"}
Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.334227 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"351e3fc279650f48ff5eac5dd9d1fabb9e666894ad4ff17a14301184bfcb26e4"}
Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.369651 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 14:19:43 crc kubenswrapper[4739]: W0218 14:19:43.370263 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06c16940_f153_4d15_891d_b0b91e9bce5a.slice/crio-21c32c2e9ede10b812a0ce894aef365f6cb819d7e6b19dec2850320bd8ff1ab4 WatchSource:0}: Error finding container 21c32c2e9ede10b812a0ce894aef365f6cb819d7e6b19dec2850320bd8ff1ab4: Status 404 returned error can't find the container with id 21c32c2e9ede10b812a0ce894aef365f6cb819d7e6b19dec2850320bd8ff1ab4
Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.380768 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zz64p-config-zqwr9"]
Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.392258 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zz64p-config-zqwr9"]
Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.436925 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x4jss"
Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.012653 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x4jss"]
Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.345682 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x4jss" event={"ID":"be735ec5-4c83-4f86-bffd-b42877b96df2","Type":"ContainerStarted","Data":"17b7a228a9fbcf851aed446c2de3568b52fb77affe9764c39277650c860631aa"}
Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.346038 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x4jss" event={"ID":"be735ec5-4c83-4f86-bffd-b42877b96df2","Type":"ContainerStarted","Data":"a10a503ee50917cfadfe83e9c1c13a6e8fa809f2ae7aa15a510e503bdb352de9"}
Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.347407 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerStarted","Data":"21c32c2e9ede10b812a0ce894aef365f6cb819d7e6b19dec2850320bd8ff1ab4"}
Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.349095 4739 generic.go:334] "Generic (PLEG): container finished" podID="b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" containerID="040eeb174e895a0add4ac74007d11ab4b4e0bb01f7764fd5d6eff38c7db3910b" exitCode=0
Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.349180 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" event={"ID":"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff","Type":"ContainerDied","Data":"040eeb174e895a0add4ac74007d11ab4b4e0bb01f7764fd5d6eff38c7db3910b"}
Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.426667 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b1f63c-45e3-41c6-b25a-7136017ef699" path="/var/lib/kubelet/pods/c9b1f63c-45e3-41c6-b25a-7136017ef699/volumes"
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.385764 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" event={"ID":"4689ea28-dac4-434f-af87-18d6fc903330","Type":"ContainerDied","Data":"aeecd2b89a671dca1be3ef9e35a978d5b8bb96c2f8a21345f57c0954a3cd475b"}
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.386293 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeecd2b89a671dca1be3ef9e35a978d5b8bb96c2f8a21345f57c0954a3cd475b"
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.391188 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.445203 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-x4jss" podStartSLOduration=2.445177465 podStartE2EDuration="2.445177465s" podCreationTimestamp="2026-02-18 14:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:45.411977019 +0000 UTC m=+1217.907697951" watchObservedRunningTime="2026-02-18 14:19:45.445177465 +0000 UTC m=+1217.940898397"
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.507157 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts\") pod \"4689ea28-dac4-434f-af87-18d6fc903330\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") "
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.507346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xwmf\" (UniqueName: \"kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf\") pod \"4689ea28-dac4-434f-af87-18d6fc903330\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") "
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.508343 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4689ea28-dac4-434f-af87-18d6fc903330" (UID: "4689ea28-dac4-434f-af87-18d6fc903330"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.606615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf" (OuterVolumeSpecName: "kube-api-access-9xwmf") pod "4689ea28-dac4-434f-af87-18d6fc903330" (UID: "4689ea28-dac4-434f-af87-18d6fc903330"). InnerVolumeSpecName "kube-api-access-9xwmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.610079 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.610112 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xwmf\" (UniqueName: \"kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf\") on node \"crc\" DevicePath \"\""
Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.945671 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj"
Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.017422 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69l57\" (UniqueName: \"kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57\") pod \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") "
Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.017556 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts\") pod \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") "
Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.018660 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" (UID: "b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.025797 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57" (OuterVolumeSpecName: "kube-api-access-69l57") pod "b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" (UID: "b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff"). InnerVolumeSpecName "kube-api-access-69l57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.120541 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.120574 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69l57\" (UniqueName: \"kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57\") on node \"crc\" DevicePath \"\""
Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.412656 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"
Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.412804 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj"
Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.435926 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" event={"ID":"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff","Type":"ContainerDied","Data":"33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9"}
Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.436048 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9"
Feb 18 14:19:47 crc kubenswrapper[4739]: I0218 14:19:47.421996 4739 generic.go:334] "Generic (PLEG): container finished" podID="be735ec5-4c83-4f86-bffd-b42877b96df2" containerID="17b7a228a9fbcf851aed446c2de3568b52fb77affe9764c39277650c860631aa" exitCode=0
Feb 18 14:19:47 crc kubenswrapper[4739]: I0218 14:19:47.422098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x4jss" event={"ID":"be735ec5-4c83-4f86-bffd-b42877b96df2","Type":"ContainerDied","Data":"17b7a228a9fbcf851aed446c2de3568b52fb77affe9764c39277650c860631aa"}
Feb 18 14:19:47 crc kubenswrapper[4739]: I0218 14:19:47.424935 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerStarted","Data":"06f51eb38cceffa70932bdbeed465002f935500ebf3691d8f4a712f1d3ef416b"}
Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.818437 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 18 14:19:49 crc kubenswrapper[4739]: E0218 14:19:49.819273 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" containerName="mariadb-account-create-update"
Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.819292 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" containerName="mariadb-account-create-update"
Feb 18 14:19:49 crc kubenswrapper[4739]: E0218 14:19:49.819315 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4689ea28-dac4-434f-af87-18d6fc903330" containerName="mariadb-database-create"
Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.819323 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4689ea28-dac4-434f-af87-18d6fc903330" containerName="mariadb-database-create"
Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.819581 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" containerName="mariadb-account-create-update"
Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.819608 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4689ea28-dac4-434f-af87-18d6fc903330" containerName="mariadb-database-create"
Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.820364 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.822700 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.834438 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.008942 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xn7l\" (UniqueName: \"kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.009005 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.009343 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.111157 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xn7l\" (UniqueName: \"kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.111210 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.111312 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.119323 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.122102 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.144473 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xn7l\" (UniqueName: \"kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.442216 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 18 14:19:53 crc kubenswrapper[4739]: I0218 14:19:53.090910 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Feb 18 14:19:53 crc kubenswrapper[4739]: I0218 14:19:53.114853 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused"
Feb 18 14:19:53 crc kubenswrapper[4739]: I0218 14:19:53.214139 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused"
Feb 18 14:19:53 crc kubenswrapper[4739]: I0218 14:19:53.286295 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused"
Feb 18 14:19:53 crc kubenswrapper[4739]: I0218 14:19:53.628049 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.715384 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x4jss"
Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.750742 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbhvx\" (UniqueName: \"kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx\") pod \"be735ec5-4c83-4f86-bffd-b42877b96df2\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") "
Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.750799 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts\") pod \"be735ec5-4c83-4f86-bffd-b42877b96df2\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") "
Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.753315 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be735ec5-4c83-4f86-bffd-b42877b96df2" (UID: "be735ec5-4c83-4f86-bffd-b42877b96df2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.759514 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx" (OuterVolumeSpecName: "kube-api-access-tbhvx") pod "be735ec5-4c83-4f86-bffd-b42877b96df2" (UID: "be735ec5-4c83-4f86-bffd-b42877b96df2"). InnerVolumeSpecName "kube-api-access-tbhvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.853242 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbhvx\" (UniqueName: \"kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx\") on node \"crc\" DevicePath \"\""
Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.853278 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.084702 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.516908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnm8m" event={"ID":"edf3454e-4ac2-42a7-98b1-0f43065764c2","Type":"ContainerStarted","Data":"2f8b36ebc50069dffafc10ad5580f0650c3a5e44aee32de71fb90f645671e661"}
Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.518588 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4786d26d-b01e-4e3a-9407-81307b5a1433","Type":"ContainerStarted","Data":"7802eb786f9fd65a5a871491a73453af4c3e9308ab2608296cd37aed4159f91a"}
Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.525972 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x4jss"
Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.526874 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x4jss" event={"ID":"be735ec5-4c83-4f86-bffd-b42877b96df2","Type":"ContainerDied","Data":"a10a503ee50917cfadfe83e9c1c13a6e8fa809f2ae7aa15a510e503bdb352de9"}
Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.526911 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10a503ee50917cfadfe83e9c1c13a6e8fa809f2ae7aa15a510e503bdb352de9"
Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.541998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"6364d38b568225606ea29cbbac819b4d068a82e4af7e2fe3065262d324d7595b"}
Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.542058 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"98c9ca66c056cd72cee2dbfab7c52802a7407dbb78e0422f911a6292a8ab063e"}
Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.542071 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"a4e66e06ee6342e149e83bd665b7b281fb957c43aed72691bcc66fc29591f16e"}
Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.555571 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gnm8m" podStartSLOduration=2.052010297 podStartE2EDuration="19.555547324s" podCreationTimestamp="2026-02-18 14:19:37 +0000 UTC" firstStartedPulling="2026-02-18 14:19:38.359391525 +0000 UTC m=+1210.855112447" lastFinishedPulling="2026-02-18 14:19:55.862928552 +0000 UTC m=+1228.358649474" observedRunningTime="2026-02-18 14:19:56.542568363 +0000 UTC m=+1229.038289295" watchObservedRunningTime="2026-02-18 14:19:56.555547324 +0000 UTC m=+1229.051268256"
Feb 18 14:19:57 crc kubenswrapper[4739]: I0218 14:19:57.571177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"022adf5a640f63c67ffd622879ebd72c6fce8adb1af8152426ca84e9ab05b2b1"}
Feb 18 14:19:59 crc kubenswrapper[4739]: I0218 14:19:59.372740 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:19:59 crc kubenswrapper[4739]: I0218 14:19:59.373093 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:19:59 crc kubenswrapper[4739]: I0218 14:19:59.486019 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-x4jss"]
Feb 18 14:19:59 crc kubenswrapper[4739]: I0218 14:19:59.494522 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-x4jss"]
Feb 18 14:20:00 crc kubenswrapper[4739]: I0218 14:20:00.423277 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be735ec5-4c83-4f86-bffd-b42877b96df2" path="/var/lib/kubelet/pods/be735ec5-4c83-4f86-bffd-b42877b96df2/volumes"
Feb 18 14:20:00 crc kubenswrapper[4739]: I0218 14:20:00.599548 4739 generic.go:334] "Generic (PLEG): container finished" podID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerID="06f51eb38cceffa70932bdbeed465002f935500ebf3691d8f4a712f1d3ef416b" exitCode=0
Feb 18 14:20:00 crc kubenswrapper[4739]: I0218 14:20:00.599594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerDied","Data":"06f51eb38cceffa70932bdbeed465002f935500ebf3691d8f4a712f1d3ef416b"}
Feb 18 14:20:01 crc kubenswrapper[4739]: I0218 14:20:01.611715 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4786d26d-b01e-4e3a-9407-81307b5a1433","Type":"ContainerStarted","Data":"9182016155c2cfd3865f3579fd6250303c57c41f06d79e483e00d365f229195e"}
Feb 18 14:20:01 crc kubenswrapper[4739]: I0218 14:20:01.616772 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerStarted","Data":"86c19524753499efa01c12762f90aea45a5f08487a361af23f3b7422ebef8ddc"}
Feb 18 14:20:01 crc kubenswrapper[4739]: I0218 14:20:01.637523 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=7.468935149 podStartE2EDuration="12.637496488s" podCreationTimestamp="2026-02-18 14:19:49 +0000 UTC" firstStartedPulling="2026-02-18 14:19:56.085697757 +0000 UTC m=+1228.581418669" lastFinishedPulling="2026-02-18 14:20:01.254259086 +0000 UTC m=+1233.749980008" observedRunningTime="2026-02-18 14:20:01.628882357 +0000 UTC m=+1234.124603289" watchObservedRunningTime="2026-02-18 14:20:01.637496488 +0000 UTC m=+1234.133217410"
Feb 18 14:20:02 crc kubenswrapper[4739]: I0218 14:20:02.631355 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"47f5ed40bbfc91c681580085413de127aa97c26338b3d29f55f0c76bcae69a71"}
Feb 18 14:20:02 crc kubenswrapper[4739]: I0218 14:20:02.631724 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"982ee4400eca1d47aba235dc30f45a5c6d75edfdc3b76ef08c8d0cae89424fc5"}
Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.093288 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.115197 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused"
Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.215250 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused"
Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.290083 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.645702 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"0d6224db6b1d6e414645a7cbe83e9521f38c527b68c3202589593299ce7f369c"}
Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.646830 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"c5405471c095e5ebc4f4f1e78f4e1b1a568f4fae2058b26294cc796be04ab829"}
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.523251 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2t2n6"]
Feb 18 14:20:04 crc kubenswrapper[4739]: E0218 14:20:04.524427 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be735ec5-4c83-4f86-bffd-b42877b96df2" containerName="mariadb-account-create-update"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.524881 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="be735ec5-4c83-4f86-bffd-b42877b96df2" containerName="mariadb-account-create-update"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.525540 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="be735ec5-4c83-4f86-bffd-b42877b96df2" containerName="mariadb-account-create-update"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.526897 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2t2n6"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.529910 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.571074 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2t2n6"]
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.651903 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.652087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnk46\" (UniqueName: \"kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.754544 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.754685 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnk46\" (UniqueName: \"kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.755273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.773462 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnk46\" (UniqueName: \"kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6"
Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.861043 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2t2n6"
Feb 18 14:20:05 crc kubenswrapper[4739]: I0218 14:20:05.629927 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2t2n6"]
Feb 18 14:20:05 crc kubenswrapper[4739]: I0218 14:20:05.673261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerStarted","Data":"2d7eba74d22f044df34fcf837e65ca0f2e9a819ee38ddcd2edfc0c8d1ca54976"}
Feb 18 14:20:05 crc kubenswrapper[4739]: I0218 14:20:05.673307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerStarted","Data":"2514064716b3f6a4ca2240a403645f2b949cf1307be4e104acdf8555dd6f695f"}
Feb 18 14:20:05 crc kubenswrapper[4739]: I0218 14:20:05.676101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2t2n6" event={"ID":"f1df0b15-6927-4300-b034-6b5c3308320d","Type":"ContainerStarted","Data":"26376d19c21e786b47736a5a91bcecd4e8d1a77a816ef99db75638cabc2785ad"}
Feb 18 14:20:05 crc kubenswrapper[4739]: I0218 14:20:05.745947 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.745930014 podStartE2EDuration="23.745930014s" podCreationTimestamp="2026-02-18 14:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:05.734187423 +0000 UTC m=+1238.229908365" watchObservedRunningTime="2026-02-18 14:20:05.745930014 +0000 UTC m=+1238.241650936"
Feb 18 14:20:06 crc kubenswrapper[4739]: I0218 14:20:06.687824 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2t2n6" event={"ID":"f1df0b15-6927-4300-b034-6b5c3308320d","Type":"ContainerStarted","Data":"fad628d0c641c2b53d938feaf95bc1f324bbe0db103093a12604f18fd9eafc41"}
Feb 18 14:20:06 crc kubenswrapper[4739]: I0218 14:20:06.694076 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"08f1f5fdf0cc1b8ae1c2f6360da0f9744802d35e300a51dcbf03f8cbd0791ae3"}
Feb 18 14:20:06 crc kubenswrapper[4739]: I0218 14:20:06.694111 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"8ff2ed06d7bb7a33eef044e31c1e17c774d5a47a46df71234fadeed94140a689"}
Feb 18 14:20:06 crc kubenswrapper[4739]: I0218 14:20:06.694124 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"bf70db47279e4273c5b2a9187d41b22c243f16b3ace68331e4844615ed31e986"}
Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.712139 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"c201776fb96bab117bcbb2847e1926f7b7ab16c52d0e274cbd21b4dfa2dc8812"}
Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.712750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"cd3e0ca5aeb4f731de00ccbe044fc94f659ea6e13f34bc55d0f10e50f7b38d27"}
Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.712768 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"5c9deabeebe1cca4f87bce896721938f30e09175649f837fbc18025790d74574"}
Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.715069 4739 generic.go:334] "Generic (PLEG): container finished" podID="f1df0b15-6927-4300-b034-6b5c3308320d" containerID="fad628d0c641c2b53d938feaf95bc1f324bbe0db103093a12604f18fd9eafc41" exitCode=0
Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.715119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2t2n6" event={"ID":"f1df0b15-6927-4300-b034-6b5c3308320d","Type":"ContainerDied","Data":"fad628d0c641c2b53d938feaf95bc1f324bbe0db103093a12604f18fd9eafc41"}
Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.810296 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 18 14:20:08 crc kubenswrapper[4739]: I0218 14:20:08.731908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"784b00045d7b56ff771b3e749626f57d6b1b5dae332b4dc6eb4708c5bf3ddaa3"}
Feb 18 14:20:08 crc kubenswrapper[4739]: I0218 14:20:08.776246 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.130451193 podStartE2EDuration="59.776228753s" podCreationTimestamp="2026-02-18 14:19:09 +0000 UTC" firstStartedPulling="2026-02-18 14:19:42.999175834 +0000 UTC m=+1215.494896756" lastFinishedPulling="2026-02-18 14:20:05.644953394 +0000 UTC m=+1238.140674316" observedRunningTime="2026-02-18 14:20:08.772415565 +0000 UTC m=+1241.268136517" watchObservedRunningTime="2026-02-18 14:20:08.776228753 +0000 UTC m=+1241.271949675"
Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.071966 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"]
Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.074333 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn"
Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.077048 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.099007 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"]
Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178027 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn"
Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178084 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn"
Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178111 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn"
Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178132 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") "
pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178160 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178419 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24b8d\" (UniqueName: \"kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.188575 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.280267 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts\") pod \"f1df0b15-6927-4300-b034-6b5c3308320d\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.280548 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnk46\" (UniqueName: \"kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46\") pod \"f1df0b15-6927-4300-b034-6b5c3308320d\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281170 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281219 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1df0b15-6927-4300-b034-6b5c3308320d" (UID: "f1df0b15-6927-4300-b034-6b5c3308320d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281244 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281332 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24b8d\" (UniqueName: \"kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281488 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.282185 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.283073 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.283899 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.283991 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.284657 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.292657 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46" (OuterVolumeSpecName: "kube-api-access-tnk46") pod "f1df0b15-6927-4300-b034-6b5c3308320d" (UID: "f1df0b15-6927-4300-b034-6b5c3308320d"). InnerVolumeSpecName "kube-api-access-tnk46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.301190 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24b8d\" (UniqueName: \"kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.383794 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnk46\" (UniqueName: \"kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.485419 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.751829 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.751880 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2t2n6" event={"ID":"f1df0b15-6927-4300-b034-6b5c3308320d","Type":"ContainerDied","Data":"26376d19c21e786b47736a5a91bcecd4e8d1a77a816ef99db75638cabc2785ad"} Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.753085 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26376d19c21e786b47736a5a91bcecd4e8d1a77a816ef99db75638cabc2785ad" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.991009 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"] Feb 18 14:20:10 crc kubenswrapper[4739]: I0218 14:20:10.763363 4739 generic.go:334] "Generic (PLEG): container finished" podID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerID="0af0be098f1f2e90f6517909dc969ea837f11c0c5020ec683a860a135d91b0f1" exitCode=0 Feb 18 14:20:10 crc kubenswrapper[4739]: I0218 14:20:10.763580 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" event={"ID":"449c4682-2359-4fcc-8578-fd524beaf6d6","Type":"ContainerDied","Data":"0af0be098f1f2e90f6517909dc969ea837f11c0c5020ec683a860a135d91b0f1"} Feb 18 14:20:10 crc kubenswrapper[4739]: I0218 14:20:10.763998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" event={"ID":"449c4682-2359-4fcc-8578-fd524beaf6d6","Type":"ContainerStarted","Data":"4d2046f9d4641d243874fd60e2cf83edd0111ff1d89b77492ced2775ebec2c2c"} Feb 18 14:20:11 crc kubenswrapper[4739]: I0218 14:20:11.775980 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" event={"ID":"449c4682-2359-4fcc-8578-fd524beaf6d6","Type":"ContainerStarted","Data":"56f03329df21428f26d15e7ee78eafa34d6e85bde858c22c00ae4b6f3ec7369c"} Feb 18 14:20:11 crc 
kubenswrapper[4739]: I0218 14:20:11.776757 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:11 crc kubenswrapper[4739]: I0218 14:20:11.808883 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" podStartSLOduration=2.808865961 podStartE2EDuration="2.808865961s" podCreationTimestamp="2026-02-18 14:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:11.802274932 +0000 UTC m=+1244.297995854" watchObservedRunningTime="2026-02-18 14:20:11.808865961 +0000 UTC m=+1244.304586883" Feb 18 14:20:12 crc kubenswrapper[4739]: I0218 14:20:12.810056 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 14:20:12 crc kubenswrapper[4739]: I0218 14:20:12.816556 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 14:20:13 crc kubenswrapper[4739]: I0218 14:20:13.118843 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 18 14:20:13 crc kubenswrapper[4739]: I0218 14:20:13.217405 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 18 14:20:13 crc kubenswrapper[4739]: I0218 14:20:13.801734 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.326250 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-tzg9c"] Feb 18 14:20:15 crc kubenswrapper[4739]: E0218 14:20:15.326975 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1df0b15-6927-4300-b034-6b5c3308320d" containerName="mariadb-account-create-update" Feb 18 
14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.326989 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1df0b15-6927-4300-b034-6b5c3308320d" containerName="mariadb-account-create-update" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.327197 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1df0b15-6927-4300-b034-6b5c3308320d" containerName="mariadb-account-create-update" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.327862 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.339956 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-tzg9c"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.416298 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.416780 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqq5b\" (UniqueName: \"kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.498707 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-c4dd-account-create-update-xvgtp"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.500370 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.503465 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.508264 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4km74"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.510318 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.519754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqq5b\" (UniqueName: \"kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.519892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.520638 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4km74"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.529907 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c4dd-account-create-update-xvgtp"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.536718 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " 
pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.568819 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rlcgk"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.570376 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.594139 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqq5b\" (UniqueName: \"kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.620846 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rlcgk"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.622231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.622303 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6qh\" (UniqueName: \"kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.622415 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.622501 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92lcs\" (UniqueName: \"kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.647205 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.656428 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1ad6-account-create-update-pz97t"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.657970 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.660598 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.705653 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1ad6-account-create-update-pz97t"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725320 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92lcs\" (UniqueName: \"kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725538 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725692 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts\") pod \"barbican-db-create-rlcgk\" 
(UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725765 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725811 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fk9z\" (UniqueName: \"kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6qh\" (UniqueName: \"kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725969 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpqw5\" (UniqueName: \"kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5\") pod \"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.726385 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts\") pod 
\"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.729533 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.768206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92lcs\" (UniqueName: \"kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.769116 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6qh\" (UniqueName: \"kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.828984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.829407 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts\") pod 
\"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.829509 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fk9z\" (UniqueName: \"kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.829592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpqw5\" (UniqueName: \"kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5\") pod \"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.830857 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.831375 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts\") pod \"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.837374 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.869804 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.897902 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpqw5\" (UniqueName: \"kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5\") pod \"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.932177 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fk9z\" (UniqueName: \"kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.952824 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6lzcd"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.957488 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.988968 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.115004 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gsm82"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.118040 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.126414 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.126635 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.127992 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.128762 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fzf8" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.179677 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6lzcd"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.180161 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.191129 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gsm82"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.211181 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.211316 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc2b2\" (UniqueName: \"kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2\") pod \"neutron-db-create-6lzcd\" (UID: 
\"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.244734 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64f1-account-create-update-9xxvd"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.249572 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.264552 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.277208 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64f1-account-create-update-9xxvd"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.319877 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc987\" (UniqueName: \"kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.320016 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.320575 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc 
kubenswrapper[4739]: I0218 14:20:16.320653 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.320810 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc2b2\" (UniqueName: \"kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.322351 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.336710 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d1d2-account-create-update-spvtj"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.338221 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.342408 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.361686 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d1d2-account-create-update-spvtj"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.363154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc2b2\" (UniqueName: \"kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.456788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc987\" (UniqueName: \"kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.456872 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.457394 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 
14:20:16.457530 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.459086 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vqvg\" (UniqueName: \"kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.479795 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.486979 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.503321 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-tzg9c"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.515779 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc987\" (UniqueName: \"kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987\") pod 
\"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.561135 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.562107 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.562626 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vqvg\" (UniqueName: \"kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.562989 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.563055 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zr6d\" 
(UniqueName: \"kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.592883 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vqvg\" (UniqueName: \"kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.597751 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.638865 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.671847 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.671938 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zr6d\" (UniqueName: \"kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.674642 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.693703 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zr6d\" (UniqueName: \"kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.752344 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.763032 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.897766 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tzg9c" event={"ID":"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a","Type":"ContainerStarted","Data":"2fcc9ff0ee9ec1bf3215bb73da9c8794568d2a01e795bd13f6b0ee5f76cb462a"} Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.978840 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c4dd-account-create-update-xvgtp"] Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.051489 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4km74"] Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.164246 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rlcgk"] Feb 18 14:20:17 crc kubenswrapper[4739]: W0218 14:20:17.182793 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e60ca77_b621_4dfc_8b92_89d8cad06bf0.slice/crio-dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a WatchSource:0}: Error finding container dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a: Status 404 returned error can't find the container with id dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.483384 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1ad6-account-create-update-pz97t"] Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.767769 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64f1-account-create-update-9xxvd"] Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.786491 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6lzcd"] Feb 18 14:20:17 crc kubenswrapper[4739]: W0218 14:20:17.795784 4739 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c90e24b_98c5_4e26_8819_a5ae1aef1102.slice/crio-edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81 WatchSource:0}: Error finding container edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81: Status 404 returned error can't find the container with id edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81 Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.799686 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d1d2-account-create-update-spvtj"] Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.882931 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gsm82"] Feb 18 14:20:17 crc kubenswrapper[4739]: W0218 14:20:17.901771 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbeb37ff_68ee_4cc5_add5_18fc25605b6f.slice/crio-10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa WatchSource:0}: Error finding container 10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa: Status 404 returned error can't find the container with id 10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.910904 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f1-account-create-update-9xxvd" event={"ID":"4d208990-8bd6-4b82-bba8-200f5c7985d0","Type":"ContainerStarted","Data":"1a2360048a15096079dd7c59ee1514d1f0b25699b543e5c5cc39d05d95a5037b"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.916105 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rlcgk" event={"ID":"4e60ca77-b621-4dfc-8b92-89d8cad06bf0","Type":"ContainerStarted","Data":"6e738a7131fce65327168b727257db46debba0b3633c57a8a9e6484d2f38829f"} Feb 18 14:20:17 crc 
kubenswrapper[4739]: I0218 14:20:17.916160 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rlcgk" event={"ID":"4e60ca77-b621-4dfc-8b92-89d8cad06bf0","Type":"ContainerStarted","Data":"dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.918678 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tzg9c" event={"ID":"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a","Type":"ContainerStarted","Data":"aa9ecd9df38cda3b827f1db0a7848f77cc373ad0ddebd313df697a0b9ff36e7e"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.941718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c4dd-account-create-update-xvgtp" event={"ID":"20e0fc8a-5942-417e-9fbb-4f94536db193","Type":"ContainerStarted","Data":"6e0f8193aeee1a9fde88a87836367d413530c7cef69dff31c0125463693bc71d"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.942059 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c4dd-account-create-update-xvgtp" event={"ID":"20e0fc8a-5942-417e-9fbb-4f94536db193","Type":"ContainerStarted","Data":"b59a0b6d590e4cc3c7b35ff633fe05c48128b7f0135fc72689f314a250c98f12"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.945075 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6lzcd" event={"ID":"f06df363-1196-4ba5-a5ba-d6e6c419a9d2","Type":"ContainerStarted","Data":"b57050692b3cf280eb19a6dc458c5f9ebf852ff24130bca1673c550837aa8f06"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.948988 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4km74" event={"ID":"da457314-f1eb-477e-93c7-cf0d01e0f1e1","Type":"ContainerStarted","Data":"983f1c80cf67be3eed058f21350cec25209804a043b4033e89a7b4a7d1a23683"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.949044 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-4km74" event={"ID":"da457314-f1eb-477e-93c7-cf0d01e0f1e1","Type":"ContainerStarted","Data":"5274d2c880f0b37137d00033ef51b4576afb98ee116a2c189de7881559882ace"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.955877 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1ad6-account-create-update-pz97t" event={"ID":"39bd8e39-8e54-46e1-8217-dbdd74be8a8c","Type":"ContainerStarted","Data":"0d326d9bd65ce654fe1a2b264586d9b66aecc19bd475abfcd3d94ee3f6d660d5"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.955935 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1ad6-account-create-update-pz97t" event={"ID":"39bd8e39-8e54-46e1-8217-dbdd74be8a8c","Type":"ContainerStarted","Data":"fa910e243a5121f5d39cb671e037cfa3b198d87a07aad38c1c529812ffbef96b"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.959047 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d1d2-account-create-update-spvtj" event={"ID":"2c90e24b-98c5-4e26-8819-a5ae1aef1102","Type":"ContainerStarted","Data":"edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.974399 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-rlcgk" podStartSLOduration=2.974376338 podStartE2EDuration="2.974376338s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:17.939518314 +0000 UTC m=+1250.435239256" watchObservedRunningTime="2026-02-18 14:20:17.974376338 +0000 UTC m=+1250.470097260" Feb 18 14:20:18 crc kubenswrapper[4739]: I0218 14:20:18.000318 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-tzg9c" podStartSLOduration=3.000287213 podStartE2EDuration="3.000287213s" 
podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:17.959439965 +0000 UTC m=+1250.455160887" watchObservedRunningTime="2026-02-18 14:20:18.000287213 +0000 UTC m=+1250.496008145" Feb 18 14:20:18 crc kubenswrapper[4739]: I0218 14:20:18.009105 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-c4dd-account-create-update-xvgtp" podStartSLOduration=3.009077758 podStartE2EDuration="3.009077758s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:17.982361873 +0000 UTC m=+1250.478082795" watchObservedRunningTime="2026-02-18 14:20:18.009077758 +0000 UTC m=+1250.504798690" Feb 18 14:20:18 crc kubenswrapper[4739]: I0218 14:20:18.031258 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-1ad6-account-create-update-pz97t" podStartSLOduration=3.031233077 podStartE2EDuration="3.031233077s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:18.000134919 +0000 UTC m=+1250.495855851" watchObservedRunningTime="2026-02-18 14:20:18.031233077 +0000 UTC m=+1250.526953999" Feb 18 14:20:18 crc kubenswrapper[4739]: I0218 14:20:18.042511 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-4km74" podStartSLOduration=3.042484795 podStartE2EDuration="3.042484795s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:18.014887857 +0000 UTC m=+1250.510608789" watchObservedRunningTime="2026-02-18 
14:20:18.042484795 +0000 UTC m=+1250.538205717" Feb 18 14:20:18 crc kubenswrapper[4739]: I0218 14:20:18.988253 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f1-account-create-update-9xxvd" event={"ID":"4d208990-8bd6-4b82-bba8-200f5c7985d0","Type":"ContainerStarted","Data":"76d32868e66155322323110ff775c5fb0e6f82fae8441ced2e3f98e4b9321c1d"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.050329 4739 generic.go:334] "Generic (PLEG): container finished" podID="20e0fc8a-5942-417e-9fbb-4f94536db193" containerID="6e0f8193aeee1a9fde88a87836367d413530c7cef69dff31c0125463693bc71d" exitCode=0 Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.050429 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c4dd-account-create-update-xvgtp" event={"ID":"20e0fc8a-5942-417e-9fbb-4f94536db193","Type":"ContainerDied","Data":"6e0f8193aeee1a9fde88a87836367d413530c7cef69dff31c0125463693bc71d"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.052406 4739 generic.go:334] "Generic (PLEG): container finished" podID="f06df363-1196-4ba5-a5ba-d6e6c419a9d2" containerID="e1cc91021e3962c425b43e910f166ba0094177006eafab98477f0ed269daa076" exitCode=0 Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.052465 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6lzcd" event={"ID":"f06df363-1196-4ba5-a5ba-d6e6c419a9d2","Type":"ContainerDied","Data":"e1cc91021e3962c425b43e910f166ba0094177006eafab98477f0ed269daa076"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.054497 4739 generic.go:334] "Generic (PLEG): container finished" podID="da457314-f1eb-477e-93c7-cf0d01e0f1e1" containerID="983f1c80cf67be3eed058f21350cec25209804a043b4033e89a7b4a7d1a23683" exitCode=0 Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.054538 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4km74" 
event={"ID":"da457314-f1eb-477e-93c7-cf0d01e0f1e1","Type":"ContainerDied","Data":"983f1c80cf67be3eed058f21350cec25209804a043b4033e89a7b4a7d1a23683"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.072049 4739 generic.go:334] "Generic (PLEG): container finished" podID="39bd8e39-8e54-46e1-8217-dbdd74be8a8c" containerID="0d326d9bd65ce654fe1a2b264586d9b66aecc19bd475abfcd3d94ee3f6d660d5" exitCode=0 Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.072124 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1ad6-account-create-update-pz97t" event={"ID":"39bd8e39-8e54-46e1-8217-dbdd74be8a8c","Type":"ContainerDied","Data":"0d326d9bd65ce654fe1a2b264586d9b66aecc19bd475abfcd3d94ee3f6d660d5"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.079906 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64f1-account-create-update-9xxvd" podStartSLOduration=4.079886089 podStartE2EDuration="4.079886089s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:19.049762456 +0000 UTC m=+1251.545483378" watchObservedRunningTime="2026-02-18 14:20:19.079886089 +0000 UTC m=+1251.575607011" Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.118811 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d1d2-account-create-update-spvtj" event={"ID":"2c90e24b-98c5-4e26-8819-a5ae1aef1102","Type":"ContainerStarted","Data":"f594884fb4b83b0c04ce8bf8aae7f920c402fcb97cae39a2f4cf017d5bf71b59"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.137782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gsm82" event={"ID":"dbeb37ff-68ee-4cc5-add5-18fc25605b6f","Type":"ContainerStarted","Data":"10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 
14:20:19.163785 4739 generic.go:334] "Generic (PLEG): container finished" podID="4e60ca77-b621-4dfc-8b92-89d8cad06bf0" containerID="6e738a7131fce65327168b727257db46debba0b3633c57a8a9e6484d2f38829f" exitCode=0
Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.163903 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rlcgk" event={"ID":"4e60ca77-b621-4dfc-8b92-89d8cad06bf0","Type":"ContainerDied","Data":"6e738a7131fce65327168b727257db46debba0b3633c57a8a9e6484d2f38829f"}
Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.185973 4739 generic.go:334] "Generic (PLEG): container finished" podID="26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" containerID="aa9ecd9df38cda3b827f1db0a7848f77cc373ad0ddebd313df697a0b9ff36e7e" exitCode=0
Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.186041 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tzg9c" event={"ID":"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a","Type":"ContainerDied","Data":"aa9ecd9df38cda3b827f1db0a7848f77cc373ad0ddebd313df697a0b9ff36e7e"}
Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.311327 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-d1d2-account-create-update-spvtj" podStartSLOduration=3.311300436 podStartE2EDuration="3.311300436s" podCreationTimestamp="2026-02-18 14:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:19.299553184 +0000 UTC m=+1251.795274106" watchObservedRunningTime="2026-02-18 14:20:19.311300436 +0000 UTC m=+1251.807021358"
Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.486612 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn"
Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.565331 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"]
Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.581287 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="dnsmasq-dns" containerID="cri-o://bd2acd3a75008df77a9a70e8c10e031a2f47232a877e8beae462dd4837d94738" gracePeriod=10
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.199026 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c90e24b-98c5-4e26-8819-a5ae1aef1102" containerID="f594884fb4b83b0c04ce8bf8aae7f920c402fcb97cae39a2f4cf017d5bf71b59" exitCode=0
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.199123 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d1d2-account-create-update-spvtj" event={"ID":"2c90e24b-98c5-4e26-8819-a5ae1aef1102","Type":"ContainerDied","Data":"f594884fb4b83b0c04ce8bf8aae7f920c402fcb97cae39a2f4cf017d5bf71b59"}
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.207332 4739 generic.go:334] "Generic (PLEG): container finished" podID="4d208990-8bd6-4b82-bba8-200f5c7985d0" containerID="76d32868e66155322323110ff775c5fb0e6f82fae8441ced2e3f98e4b9321c1d" exitCode=0
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.207426 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f1-account-create-update-9xxvd" event={"ID":"4d208990-8bd6-4b82-bba8-200f5c7985d0","Type":"ContainerDied","Data":"76d32868e66155322323110ff775c5fb0e6f82fae8441ced2e3f98e4b9321c1d"}
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.209729 4739 generic.go:334] "Generic (PLEG): container finished" podID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerID="bd2acd3a75008df77a9a70e8c10e031a2f47232a877e8beae462dd4837d94738" exitCode=0
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.209936 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" event={"ID":"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0","Type":"ContainerDied","Data":"bd2acd3a75008df77a9a70e8c10e031a2f47232a877e8beae462dd4837d94738"}
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.359290 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh"
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.541323 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config\") pod \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") "
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.541465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpddl\" (UniqueName: \"kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl\") pod \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") "
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.541630 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb\") pod \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") "
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.541843 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb\") pod \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") "
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.541874 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc\") pod \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") "
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.552719 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl" (OuterVolumeSpecName: "kube-api-access-zpddl") pod "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" (UID: "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0"). InnerVolumeSpecName "kube-api-access-zpddl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.628095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" (UID: "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.649009 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpddl\" (UniqueName: \"kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.649041 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.665258 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" (UID: "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.713271 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" (UID: "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.753149 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.753780 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.802042 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config" (OuterVolumeSpecName: "config") pod "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" (UID: "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.862497 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.895229 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tzg9c"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.052863 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1ad6-account-create-update-pz97t"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.065012 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4km74"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.070672 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts\") pod \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.070766 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqq5b\" (UniqueName: \"kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b\") pod \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.081867 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6lzcd"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.082957 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b" (OuterVolumeSpecName: "kube-api-access-jqq5b") pod "26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" (UID: "26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a"). InnerVolumeSpecName "kube-api-access-jqq5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.084095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" (UID: "26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.138794 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rlcgk"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.148693 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c4dd-account-create-update-xvgtp"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.174350 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts\") pod \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.174420 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fk9z\" (UniqueName: \"kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z\") pod \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.174730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts\") pod \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.174792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts\") pod \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.174929 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc2b2\" (UniqueName: \"kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2\") pod \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.175034 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p6qh\" (UniqueName: \"kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh\") pod \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.175726 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.175751 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqq5b\" (UniqueName: \"kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.176041 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f06df363-1196-4ba5-a5ba-d6e6c419a9d2" (UID: "f06df363-1196-4ba5-a5ba-d6e6c419a9d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.176370 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39bd8e39-8e54-46e1-8217-dbdd74be8a8c" (UID: "39bd8e39-8e54-46e1-8217-dbdd74be8a8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.182878 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z" (OuterVolumeSpecName: "kube-api-access-8fk9z") pod "39bd8e39-8e54-46e1-8217-dbdd74be8a8c" (UID: "39bd8e39-8e54-46e1-8217-dbdd74be8a8c"). InnerVolumeSpecName "kube-api-access-8fk9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.184013 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da457314-f1eb-477e-93c7-cf0d01e0f1e1" (UID: "da457314-f1eb-477e-93c7-cf0d01e0f1e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.186510 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2" (OuterVolumeSpecName: "kube-api-access-qc2b2") pod "f06df363-1196-4ba5-a5ba-d6e6c419a9d2" (UID: "f06df363-1196-4ba5-a5ba-d6e6c419a9d2"). InnerVolumeSpecName "kube-api-access-qc2b2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.197831 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh" (OuterVolumeSpecName: "kube-api-access-8p6qh") pod "da457314-f1eb-477e-93c7-cf0d01e0f1e1" (UID: "da457314-f1eb-477e-93c7-cf0d01e0f1e1"). InnerVolumeSpecName "kube-api-access-8p6qh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.274828 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6lzcd"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.274837 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6lzcd" event={"ID":"f06df363-1196-4ba5-a5ba-d6e6c419a9d2","Type":"ContainerDied","Data":"b57050692b3cf280eb19a6dc458c5f9ebf852ff24130bca1673c550837aa8f06"}
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.275613 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b57050692b3cf280eb19a6dc458c5f9ebf852ff24130bca1673c550837aa8f06"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.276725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts\") pod \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.277329 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpqw5\" (UniqueName: \"kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5\") pod \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.277597 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts\") pod \"20e0fc8a-5942-417e-9fbb-4f94536db193\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.277757 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92lcs\" (UniqueName: \"kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs\") pod \"20e0fc8a-5942-417e-9fbb-4f94536db193\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") "
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.278864 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.278888 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.278902 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc2b2\" (UniqueName: \"kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.278915 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p6qh\" (UniqueName: \"kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.279113 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.279153 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fk9z\" (UniqueName: \"kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.282513 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20e0fc8a-5942-417e-9fbb-4f94536db193" (UID: "20e0fc8a-5942-417e-9fbb-4f94536db193"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.282769 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e60ca77-b621-4dfc-8b92-89d8cad06bf0" (UID: "4e60ca77-b621-4dfc-8b92-89d8cad06bf0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.286524 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5" (OuterVolumeSpecName: "kube-api-access-jpqw5") pod "4e60ca77-b621-4dfc-8b92-89d8cad06bf0" (UID: "4e60ca77-b621-4dfc-8b92-89d8cad06bf0"). InnerVolumeSpecName "kube-api-access-jpqw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.291599 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" event={"ID":"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0","Type":"ContainerDied","Data":"d2bcc5bdfd6b01d7eae8c031aa45506d66a71e0990ef1e90815d622f0b826c17"}
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.291682 4739 scope.go:117] "RemoveContainer" containerID="bd2acd3a75008df77a9a70e8c10e031a2f47232a877e8beae462dd4837d94738"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.291891 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.302166 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs" (OuterVolumeSpecName: "kube-api-access-92lcs") pod "20e0fc8a-5942-417e-9fbb-4f94536db193" (UID: "20e0fc8a-5942-417e-9fbb-4f94536db193"). InnerVolumeSpecName "kube-api-access-92lcs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.308857 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4km74"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.308870 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4km74" event={"ID":"da457314-f1eb-477e-93c7-cf0d01e0f1e1","Type":"ContainerDied","Data":"5274d2c880f0b37137d00033ef51b4576afb98ee116a2c189de7881559882ace"}
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.308951 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5274d2c880f0b37137d00033ef51b4576afb98ee116a2c189de7881559882ace"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.311415 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1ad6-account-create-update-pz97t"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.311416 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1ad6-account-create-update-pz97t" event={"ID":"39bd8e39-8e54-46e1-8217-dbdd74be8a8c","Type":"ContainerDied","Data":"fa910e243a5121f5d39cb671e037cfa3b198d87a07aad38c1c529812ffbef96b"}
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.311885 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa910e243a5121f5d39cb671e037cfa3b198d87a07aad38c1c529812ffbef96b"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.313509 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rlcgk"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.313522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rlcgk" event={"ID":"4e60ca77-b621-4dfc-8b92-89d8cad06bf0","Type":"ContainerDied","Data":"dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a"}
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.313690 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.317486 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tzg9c" event={"ID":"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a","Type":"ContainerDied","Data":"2fcc9ff0ee9ec1bf3215bb73da9c8794568d2a01e795bd13f6b0ee5f76cb462a"}
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.317556 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fcc9ff0ee9ec1bf3215bb73da9c8794568d2a01e795bd13f6b0ee5f76cb462a"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.317726 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tzg9c"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.325229 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c4dd-account-create-update-xvgtp"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.325508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c4dd-account-create-update-xvgtp" event={"ID":"20e0fc8a-5942-417e-9fbb-4f94536db193","Type":"ContainerDied","Data":"b59a0b6d590e4cc3c7b35ff633fe05c48128b7f0135fc72689f314a250c98f12"}
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.325553 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b59a0b6d590e4cc3c7b35ff633fe05c48128b7f0135fc72689f314a250c98f12"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.346973 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"]
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.355374 4739 scope.go:117] "RemoveContainer" containerID="444fdbf2047039f125d6d76b03e432e4f2458521013159c69b011aaf37854298"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.356539 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"]
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.391038 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.391106 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpqw5\" (UniqueName: \"kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.391123 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.391137 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92lcs\" (UniqueName: \"kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.911916 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d1d2-account-create-update-spvtj"
Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.922652 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f1-account-create-update-9xxvd"
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.012262 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts\") pod \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") "
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.012541 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zr6d\" (UniqueName: \"kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d\") pod \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") "
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.014867 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c90e24b-98c5-4e26-8819-a5ae1aef1102" (UID: "2c90e24b-98c5-4e26-8819-a5ae1aef1102"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.032724 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d" (OuterVolumeSpecName: "kube-api-access-8zr6d") pod "2c90e24b-98c5-4e26-8819-a5ae1aef1102" (UID: "2c90e24b-98c5-4e26-8819-a5ae1aef1102"). InnerVolumeSpecName "kube-api-access-8zr6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.118952 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vqvg\" (UniqueName: \"kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg\") pod \"4d208990-8bd6-4b82-bba8-200f5c7985d0\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") "
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.118986 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts\") pod \"4d208990-8bd6-4b82-bba8-200f5c7985d0\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") "
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.119856 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zr6d\" (UniqueName: \"kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.119882 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.122350 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d208990-8bd6-4b82-bba8-200f5c7985d0" (UID: "4d208990-8bd6-4b82-bba8-200f5c7985d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.130709 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg" (OuterVolumeSpecName: "kube-api-access-6vqvg") pod "4d208990-8bd6-4b82-bba8-200f5c7985d0" (UID: "4d208990-8bd6-4b82-bba8-200f5c7985d0"). InnerVolumeSpecName "kube-api-access-6vqvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.221736 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vqvg\" (UniqueName: \"kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.222958 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.347265 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d1d2-account-create-update-spvtj" event={"ID":"2c90e24b-98c5-4e26-8819-a5ae1aef1102","Type":"ContainerDied","Data":"edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81"}
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.347310 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81"
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.347377 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d1d2-account-create-update-spvtj"
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.364818 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f1-account-create-update-9xxvd" event={"ID":"4d208990-8bd6-4b82-bba8-200f5c7985d0","Type":"ContainerDied","Data":"1a2360048a15096079dd7c59ee1514d1f0b25699b543e5c5cc39d05d95a5037b"}
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.364882 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2360048a15096079dd7c59ee1514d1f0b25699b543e5c5cc39d05d95a5037b"
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.364964 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f1-account-create-update-9xxvd"
Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.433263 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" path="/var/lib/kubelet/pods/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0/volumes"
Feb 18 14:20:23 crc kubenswrapper[4739]: I0218 14:20:23.379896 4739 generic.go:334] "Generic (PLEG): container finished" podID="edf3454e-4ac2-42a7-98b1-0f43065764c2" containerID="2f8b36ebc50069dffafc10ad5580f0650c3a5e44aee32de71fb90f645671e661" exitCode=0
Feb 18 14:20:23 crc kubenswrapper[4739]: I0218 14:20:23.380008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnm8m" event={"ID":"edf3454e-4ac2-42a7-98b1-0f43065764c2","Type":"ContainerDied","Data":"2f8b36ebc50069dffafc10ad5580f0650c3a5e44aee32de71fb90f645671e661"}
Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.580847 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gnm8m"
Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.706308 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle\") pod \"edf3454e-4ac2-42a7-98b1-0f43065764c2\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") "
Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.706480 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data\") pod \"edf3454e-4ac2-42a7-98b1-0f43065764c2\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") "
Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.706643 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzphm\" (UniqueName: \"kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm\") pod \"edf3454e-4ac2-42a7-98b1-0f43065764c2\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") "
Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.706730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data\") pod \"edf3454e-4ac2-42a7-98b1-0f43065764c2\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") "
Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.712099 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm" (OuterVolumeSpecName: "kube-api-access-bzphm") pod "edf3454e-4ac2-42a7-98b1-0f43065764c2" (UID: "edf3454e-4ac2-42a7-98b1-0f43065764c2"). InnerVolumeSpecName "kube-api-access-bzphm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.723856 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "edf3454e-4ac2-42a7-98b1-0f43065764c2" (UID: "edf3454e-4ac2-42a7-98b1-0f43065764c2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.738987 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edf3454e-4ac2-42a7-98b1-0f43065764c2" (UID: "edf3454e-4ac2-42a7-98b1-0f43065764c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.772141 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data" (OuterVolumeSpecName: "config-data") pod "edf3454e-4ac2-42a7-98b1-0f43065764c2" (UID: "edf3454e-4ac2-42a7-98b1-0f43065764c2"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.809094 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.809129 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzphm\" (UniqueName: \"kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.809147 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.809185 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.490194 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gsm82" event={"ID":"dbeb37ff-68ee-4cc5-add5-18fc25605b6f","Type":"ContainerStarted","Data":"008998419ac3a845430a1074a96b3f7b5b4ba5a04964c1bb0ae62e1f93981104"} Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.507517 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnm8m" event={"ID":"edf3454e-4ac2-42a7-98b1-0f43065764c2","Type":"ContainerDied","Data":"2b55e9103d7f00a94e8592c5a8d14e8e0f69cd459f1c5013831102a48b6f0d28"} Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.507568 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b55e9103d7f00a94e8592c5a8d14e8e0f69cd459f1c5013831102a48b6f0d28" Feb 18 14:20:26 crc 
kubenswrapper[4739]: I0218 14:20:26.507654 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gnm8m" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.543036 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gsm82" podStartSLOduration=3.87687706 podStartE2EDuration="11.543010055s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="2026-02-18 14:20:17.90858474 +0000 UTC m=+1250.404305662" lastFinishedPulling="2026-02-18 14:20:25.574717735 +0000 UTC m=+1258.070438657" observedRunningTime="2026-02-18 14:20:26.528494023 +0000 UTC m=+1259.024214945" watchObservedRunningTime="2026-02-18 14:20:26.543010055 +0000 UTC m=+1259.038730987" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995206 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.995898 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c90e24b-98c5-4e26-8819-a5ae1aef1102" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995912 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c90e24b-98c5-4e26-8819-a5ae1aef1102" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.995925 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995933 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.995943 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bd8e39-8e54-46e1-8217-dbdd74be8a8c" containerName="mariadb-account-create-update" 
Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995950 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bd8e39-8e54-46e1-8217-dbdd74be8a8c" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.995968 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da457314-f1eb-477e-93c7-cf0d01e0f1e1" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995974 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="da457314-f1eb-477e-93c7-cf0d01e0f1e1" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.995983 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06df363-1196-4ba5-a5ba-d6e6c419a9d2" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995989 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06df363-1196-4ba5-a5ba-d6e6c419a9d2" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996009 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="init" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996016 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="init" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996028 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf3454e-4ac2-42a7-98b1-0f43065764c2" containerName="glance-db-sync" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996034 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf3454e-4ac2-42a7-98b1-0f43065764c2" containerName="glance-db-sync" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996046 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e0fc8a-5942-417e-9fbb-4f94536db193" 
containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996052 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e0fc8a-5942-417e-9fbb-4f94536db193" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996062 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d208990-8bd6-4b82-bba8-200f5c7985d0" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996069 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d208990-8bd6-4b82-bba8-200f5c7985d0" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996077 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e60ca77-b621-4dfc-8b92-89d8cad06bf0" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996083 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e60ca77-b621-4dfc-8b92-89d8cad06bf0" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996094 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="dnsmasq-dns" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996099 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="dnsmasq-dns" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996268 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf3454e-4ac2-42a7-98b1-0f43065764c2" containerName="glance-db-sync" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996280 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e60ca77-b621-4dfc-8b92-89d8cad06bf0" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996293 4739 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996304 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06df363-1196-4ba5-a5ba-d6e6c419a9d2" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996313 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e0fc8a-5942-417e-9fbb-4f94536db193" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996326 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="dnsmasq-dns" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996337 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="da457314-f1eb-477e-93c7-cf0d01e0f1e1" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996349 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c90e24b-98c5-4e26-8819-a5ae1aef1102" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996360 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d208990-8bd6-4b82-bba8-200f5c7985d0" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996375 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bd8e39-8e54-46e1-8217-dbdd74be8a8c" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.997405 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.017634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142023 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142287 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142342 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142389 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142556 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgw6\" (UniqueName: \"kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244846 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244865 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244890 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244913 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgw6\" (UniqueName: \"kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244941 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.245666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.245787 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.245825 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb\") 
pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.246062 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.246082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.266253 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgw6\" (UniqueName: \"kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.316820 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: W0218 14:20:27.840470 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda95a3e0d_f263_464b_9406_0fc51724a068.slice/crio-e8e67403108bde3a436c81c4b7ef9a41f1b4af29116b93e8959bf7b75aa603d8 WatchSource:0}: Error finding container e8e67403108bde3a436c81c4b7ef9a41f1b4af29116b93e8959bf7b75aa603d8: Status 404 returned error can't find the container with id e8e67403108bde3a436c81c4b7ef9a41f1b4af29116b93e8959bf7b75aa603d8 Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.842087 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:20:28 crc kubenswrapper[4739]: I0218 14:20:28.566151 4739 generic.go:334] "Generic (PLEG): container finished" podID="a95a3e0d-f263-464b-9406-0fc51724a068" containerID="521ee440b42cc6ac855fe6f696353905b77bad514b6fa532070f2cedd7a11e27" exitCode=0 Feb 18 14:20:28 crc kubenswrapper[4739]: I0218 14:20:28.566301 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" event={"ID":"a95a3e0d-f263-464b-9406-0fc51724a068","Type":"ContainerDied","Data":"521ee440b42cc6ac855fe6f696353905b77bad514b6fa532070f2cedd7a11e27"} Feb 18 14:20:28 crc kubenswrapper[4739]: I0218 14:20:28.566840 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" event={"ID":"a95a3e0d-f263-464b-9406-0fc51724a068","Type":"ContainerStarted","Data":"e8e67403108bde3a436c81c4b7ef9a41f1b4af29116b93e8959bf7b75aa603d8"} Feb 18 14:20:29 crc kubenswrapper[4739]: I0218 14:20:29.373506 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 18 14:20:29 crc kubenswrapper[4739]: I0218 14:20:29.374089 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:20:29 crc kubenswrapper[4739]: I0218 14:20:29.590619 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" event={"ID":"a95a3e0d-f263-464b-9406-0fc51724a068","Type":"ContainerStarted","Data":"2ba789c14a907f042da88ae951cbe7458905348d9982d8330fe417e5b45cd9fc"} Feb 18 14:20:29 crc kubenswrapper[4739]: I0218 14:20:29.590774 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:29 crc kubenswrapper[4739]: I0218 14:20:29.617348 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podStartSLOduration=3.617326893 podStartE2EDuration="3.617326893s" podCreationTimestamp="2026-02-18 14:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:29.609278706 +0000 UTC m=+1262.104999638" watchObservedRunningTime="2026-02-18 14:20:29.617326893 +0000 UTC m=+1262.113047815" Feb 18 14:20:36 crc kubenswrapper[4739]: I0218 14:20:36.672440 4739 generic.go:334] "Generic (PLEG): container finished" podID="dbeb37ff-68ee-4cc5-add5-18fc25605b6f" containerID="008998419ac3a845430a1074a96b3f7b5b4ba5a04964c1bb0ae62e1f93981104" exitCode=0 Feb 18 14:20:36 crc kubenswrapper[4739]: I0218 14:20:36.672540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gsm82" 
event={"ID":"dbeb37ff-68ee-4cc5-add5-18fc25605b6f","Type":"ContainerDied","Data":"008998419ac3a845430a1074a96b3f7b5b4ba5a04964c1bb0ae62e1f93981104"} Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.319347 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.389484 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"] Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.389768 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="dnsmasq-dns" containerID="cri-o://56f03329df21428f26d15e7ee78eafa34d6e85bde858c22c00ae4b6f3ec7369c" gracePeriod=10 Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.684980 4739 generic.go:334] "Generic (PLEG): container finished" podID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerID="56f03329df21428f26d15e7ee78eafa34d6e85bde858c22c00ae4b6f3ec7369c" exitCode=0 Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.685049 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" event={"ID":"449c4682-2359-4fcc-8578-fd524beaf6d6","Type":"ContainerDied","Data":"56f03329df21428f26d15e7ee78eafa34d6e85bde858c22c00ae4b6f3ec7369c"} Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.968978 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081038 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081332 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081436 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24b8d\" (UniqueName: \"kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081616 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081878 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081986 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.093277 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d" (OuterVolumeSpecName: "kube-api-access-24b8d") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "kube-api-access-24b8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.141780 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.149078 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.160099 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.170849 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config" (OuterVolumeSpecName: "config") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.185551 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.185587 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.185599 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.185614 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24b8d\" (UniqueName: \"kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.185626 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.188116 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.205269 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gsm82"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.287270 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc987\" (UniqueName: \"kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987\") pod \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") "
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.287492 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle\") pod \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") "
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.287532 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data\") pod \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") "
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.287970 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.291331 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987" (OuterVolumeSpecName: "kube-api-access-lc987") pod "dbeb37ff-68ee-4cc5-add5-18fc25605b6f" (UID: "dbeb37ff-68ee-4cc5-add5-18fc25605b6f"). InnerVolumeSpecName "kube-api-access-lc987". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.328031 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbeb37ff-68ee-4cc5-add5-18fc25605b6f" (UID: "dbeb37ff-68ee-4cc5-add5-18fc25605b6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.349607 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data" (OuterVolumeSpecName: "config-data") pod "dbeb37ff-68ee-4cc5-add5-18fc25605b6f" (UID: "dbeb37ff-68ee-4cc5-add5-18fc25605b6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.390375 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.390416 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.390427 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc987\" (UniqueName: \"kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987\") on node \"crc\" DevicePath \"\""
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.694793 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" event={"ID":"449c4682-2359-4fcc-8578-fd524beaf6d6","Type":"ContainerDied","Data":"4d2046f9d4641d243874fd60e2cf83edd0111ff1d89b77492ced2775ebec2c2c"}
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.694849 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.694860 4739 scope.go:117] "RemoveContainer" containerID="56f03329df21428f26d15e7ee78eafa34d6e85bde858c22c00ae4b6f3ec7369c"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.696873 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gsm82" event={"ID":"dbeb37ff-68ee-4cc5-add5-18fc25605b6f","Type":"ContainerDied","Data":"10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa"}
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.696900 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.696920 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gsm82"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.728375 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"]
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.729196 4739 scope.go:117] "RemoveContainer" containerID="0af0be098f1f2e90f6517909dc969ea837f11c0c5020ec683a860a135d91b0f1"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.749175 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"]
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.960291 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"]
Feb 18 14:20:38 crc kubenswrapper[4739]: E0218 14:20:38.960781 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="init"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.960808 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="init"
Feb 18 14:20:38 crc kubenswrapper[4739]: E0218 14:20:38.960833 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="dnsmasq-dns"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.960841 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="dnsmasq-dns"
Feb 18 14:20:38 crc kubenswrapper[4739]: E0218 14:20:38.960860 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbeb37ff-68ee-4cc5-add5-18fc25605b6f" containerName="keystone-db-sync"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.960867 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbeb37ff-68ee-4cc5-add5-18fc25605b6f" containerName="keystone-db-sync"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.961082 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="dnsmasq-dns"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.961100 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbeb37ff-68ee-4cc5-add5-18fc25605b6f" containerName="keystone-db-sync"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.962167 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.992430 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"]
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.009416 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.009601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.009753 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.009844 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.009890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.010173 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4nck\" (UniqueName: \"kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.021988 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pffpk"]
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.023902 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.028372 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.028586 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.028773 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fzf8"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.029136 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.029414 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.054664 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pffpk"]
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.189133 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.189935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190051 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190197 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190359 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190469 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190634 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190720 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h99sv\" (UniqueName: \"kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190996 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.191083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4nck\" (UniqueName: \"kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.193537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.194342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.199565 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.199897 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.200145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.204585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.224264 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-2dhxm"]
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.226393 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.242456 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2dhxm"]
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.255242 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gcstc"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.255611 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.257303 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4nck\" (UniqueName: \"kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.281580 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.301787 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.301894 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h99sv\" (UniqueName: \"kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.301923 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.301963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wgcv\" (UniqueName: \"kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.301993 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.302048 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.302149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.302215 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.302267 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.327618 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.328113 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.331285 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.342431 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-hm27f"]
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.357095 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.359531 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.369871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.371198 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9bgt9"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.371484 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.371600 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.377810 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h99sv\" (UniqueName: \"kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.406901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh97j\" (UniqueName: \"kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407013 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407050 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407071 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407105 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wgcv\" (UniqueName: \"kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407140 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407167 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407192 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407260 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.414216 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hm27f"]
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.414373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.415092 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.512213 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wgcv\" (UniqueName: \"kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516246 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516323 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh97j\" (UniqueName: \"kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516534 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516632 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.522280 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.522362 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.527013 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.538708 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.560389 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.641227 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh97j\" (UniqueName: \"kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.668527 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-q58nf"]
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.670991 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q58nf"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.673986 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pffpk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.686799 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.687267 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-f4jrj"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.696555 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.696743 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hc8hk"]
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.698177 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hc8hk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.714825 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.715054 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-crc55"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.715196 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.727216 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.727554 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7wr\" (UniqueName: \"kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.728024 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk"
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.749545 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q58nf"]
Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.752535 4739 util.go:30] "No sandbox for
pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.779586 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hc8hk"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.780167 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.798769 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830298 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830351 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830498 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830522 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw7wr\" (UniqueName: \"kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830582 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d67kg\" (UniqueName: \"kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830640 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.855345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle\") pod 
\"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.877181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.888243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw7wr\" (UniqueName: \"kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.906163 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.914530 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.916837 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.948518 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.952605 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.952792 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d67kg\" (UniqueName: \"kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.952852 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.952980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.953003 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle\") pod 
\"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.957399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.970585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.971357 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.990239 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.014811 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h5s86"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.016594 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.016976 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d67kg\" (UniqueName: \"kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.038089 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.038139 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xnq4d" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.038368 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h5s86"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.067161 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.071989 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.072083 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dzc\" (UniqueName: \"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.072199 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.072225 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.072255 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb\") pod 
\"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.072290 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184454 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dzc\" (UniqueName: \"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184689 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7wlp\" (UniqueName: \"kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184801 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184901 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184941 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184996 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.185039 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config\") 
pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.185922 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.186474 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.188273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.188489 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.188912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.215927 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.218622 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.237967 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.243188 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.265364 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dzc\" (UniqueName: \"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.296159 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7wlp\" (UniqueName: \"kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.296271 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.296352 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.304202 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.304578 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.305719 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.306633 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.351684 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.354305 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.355199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7wlp\" (UniqueName: \"kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.363469 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.364119 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.364271 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gvb8h" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.364376 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.370479 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: W0218 14:20:40.376934 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6a350a1_b153_4edb_b937_ff7ccec8d1de.slice/crio-9e14d3ee166aded1d7a8910ebecdb1eccbc4c5aab0200432ebde4cfc1c5a5473 WatchSource:0}: Error finding container 9e14d3ee166aded1d7a8910ebecdb1eccbc4c5aab0200432ebde4cfc1c5a5473: Status 404 returned error can't find the container with id 9e14d3ee166aded1d7a8910ebecdb1eccbc4c5aab0200432ebde4cfc1c5a5473 Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.378116 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401114 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngbgv\" (UniqueName: \"kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401481 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401519 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401563 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401655 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401812 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.460129 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" path="/var/lib/kubelet/pods/449c4682-2359-4fcc-8578-fd524beaf6d6/volumes" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.461514 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.465363 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"] Feb 18 14:20:40 crc 
kubenswrapper[4739]: I0218 14:20:40.465493 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.469084 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.469130 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.506368 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.506424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngbgv\" (UniqueName: \"kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.506467 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510605 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " 
pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510677 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510729 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510839 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510960 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkcvc\" (UniqueName: \"kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510999 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511081 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511119 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511146 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511187 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511789 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.516198 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.519091 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.540485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngbgv\" (UniqueName: \"kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.546683 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.554660 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.589229 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.601627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.613718 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.613842 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.613885 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkcvc\" (UniqueName: 
\"kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.613920 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.613951 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614023 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614109 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l8swm\" (UniqueName: \"kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614135 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614252 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614342 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614547 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.615998 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.616361 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.620552 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.622599 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.626440 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.626499 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f742b1b3d6273dd3375e0e5a76a4c01f047ef0c4f7f8765a09ef674c2c3b6349/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.634759 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.637233 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.645869 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkcvc\" (UniqueName: \"kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.669208 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pffpk"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.698065 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.722861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.722976 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723113 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 
14:20:40.723141 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723163 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723203 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8swm\" (UniqueName: \"kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723249 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.724162 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.725509 4739 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.732466 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.733412 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.733466 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0bd6abac90ebac69ac03837941e4aa1820f14a49ea1b1fe31e1dd216b0487447/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.741371 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.742469 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.753940 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.787649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8swm\" (UniqueName: \"kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.824825 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.840660 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffpk" event={"ID":"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7","Type":"ContainerStarted","Data":"70d11242c01619e7bdfd32d0a6252d06f3b61a6d441fcbc7ab28b9bd66c4286b"} Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.844671 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" 
event={"ID":"e6a350a1-b153-4edb-b937-ff7ccec8d1de","Type":"ContainerStarted","Data":"9e14d3ee166aded1d7a8910ebecdb1eccbc4c5aab0200432ebde4cfc1c5a5473"} Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.873894 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.901736 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.985101 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.337717 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hm27f"] Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.412142 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2dhxm"] Feb 18 14:20:41 crc kubenswrapper[4739]: W0218 14:20:41.572840 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3697715_3f94_4086_99ab_65a492bd7542.slice/crio-7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404 WatchSource:0}: Error finding container 7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404: Status 404 returned error can't find the container with id 7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404 Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.589682 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q58nf"] Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.604744 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hc8hk"] Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.857229 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-db-sync-2dhxm" event={"ID":"3edd4390-e376-469a-b7c5-9bd7bf9dd100","Type":"ContainerStarted","Data":"ab3a872330660cb89409af9b912cee12aa6ccbf272a46a86fd90d8fd6dc9f4c2"} Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.860522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hc8hk" event={"ID":"b3697715-3f94-4086-99ab-65a492bd7542","Type":"ContainerStarted","Data":"7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404"} Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.867063 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffpk" event={"ID":"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7","Type":"ContainerStarted","Data":"0a9c96ef9bc05a189057147729fcd0a7c0a62f199e816b285da0bdde192dbc40"} Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.868649 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hm27f" event={"ID":"51d77527-a940-4423-ac63-4a7cdf366510","Type":"ContainerStarted","Data":"b800d2e5f20a2d68b8e0f58bfc2fa70fc222830a78f8d8d41068e13af2965ba2"} Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.872320 4739 generic.go:334] "Generic (PLEG): container finished" podID="e6a350a1-b153-4edb-b937-ff7ccec8d1de" containerID="1779a8f6e311441460ae687923fa5a4909e3214be09805f17629a2dc2d3a75ca" exitCode=0 Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.872525 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" event={"ID":"e6a350a1-b153-4edb-b937-ff7ccec8d1de","Type":"ContainerDied","Data":"1779a8f6e311441460ae687923fa5a4909e3214be09805f17629a2dc2d3a75ca"} Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.878665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q58nf" event={"ID":"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc","Type":"ContainerStarted","Data":"1870c4359d29029459a4d3730dceade0333f6df6959a787f14729f3d6e56a8fd"} Feb 18 
14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.939729 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pffpk" podStartSLOduration=3.939711038 podStartE2EDuration="3.939711038s" podCreationTimestamp="2026-02-18 14:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:41.886810711 +0000 UTC m=+1274.382531653" watchObservedRunningTime="2026-02-18 14:20:41.939711038 +0000 UTC m=+1274.435431960" Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.047402 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"] Feb 18 14:20:42 crc kubenswrapper[4739]: W0218 14:20:42.054138 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4b54fe6_91fa_4ba1_9a4e_135277494a27.slice/crio-6a36c3e7151b6223682be3dc0062f1484a767c13869813b992c048797216d7e7 WatchSource:0}: Error finding container 6a36c3e7151b6223682be3dc0062f1484a767c13869813b992c048797216d7e7: Status 404 returned error can't find the container with id 6a36c3e7151b6223682be3dc0062f1484a767c13869813b992c048797216d7e7 Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.079582 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h5s86"] Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.255012 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:20:42 crc kubenswrapper[4739]: W0218 14:20:42.263608 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddee39188_8dd1_45dd_afd8_ef4599d03adb.slice/crio-3de40032c9cfb4df3fb82bbfc31efd6983d0c4857cda5c9f3d8ac5118ab12bd7 WatchSource:0}: Error finding container 
3de40032c9cfb4df3fb82bbfc31efd6983d0c4857cda5c9f3d8ac5118ab12bd7: Status 404 returned error can't find the container with id 3de40032c9cfb4df3fb82bbfc31efd6983d0c4857cda5c9f3d8ac5118ab12bd7 Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.388313 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.730124 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.942713 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.022277 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.127810 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5s86" event={"ID":"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8","Type":"ContainerStarted","Data":"d2307342ad946d88b327f9c4998f5fef25fdf0715d6dc8137505b684ccb0bf1f"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.172389 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerStarted","Data":"3de40032c9cfb4df3fb82bbfc31efd6983d0c4857cda5c9f3d8ac5118ab12bd7"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.202599 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hc8hk" event={"ID":"b3697715-3f94-4086-99ab-65a492bd7542","Type":"ContainerStarted","Data":"615daa9d2c89107b5d8baf69578eb811649ddb2693aedf9b046cefb6786b3af5"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.206457 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.207281 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerStarted","Data":"31b7ef4c1c644cdbe389fbfc6e7e9e8a47e57aa821f30f4da35de5aa73c5099f"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.207305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerStarted","Data":"6a36c3e7151b6223682be3dc0062f1484a767c13869813b992c048797216d7e7"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.225516 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hc8hk" podStartSLOduration=4.225502462 podStartE2EDuration="4.225502462s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:43.223875571 +0000 UTC m=+1275.719596503" watchObservedRunningTime="2026-02-18 14:20:43.225502462 +0000 UTC m=+1275.721223384" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.258299 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerStarted","Data":"012dc8f477dfe3bd25f7fe5decf6c00cb3c850250a18972e074f41544b597e70"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.259097 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.259134 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.259231 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.259278 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.259382 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4nck\" (UniqueName: \"kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.263632 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.326480 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck" (OuterVolumeSpecName: "kube-api-access-w4nck") pod 
"e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "kube-api-access-w4nck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.334130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.350204 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.378328 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4nck\" (UniqueName: \"kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.378366 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.411817 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.426754 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config" (OuterVolumeSpecName: "config") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.431904 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.447261 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.480185 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.480490 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.480560 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.480620 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.272900 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerID="31b7ef4c1c644cdbe389fbfc6e7e9e8a47e57aa821f30f4da35de5aa73c5099f" exitCode=0 Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.273037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerDied","Data":"31b7ef4c1c644cdbe389fbfc6e7e9e8a47e57aa821f30f4da35de5aa73c5099f"} Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.276387 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerStarted","Data":"d7195d297c9d5141a71387652075a97edc794fb733f7afeadd4dd323957a1f63"} Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.280022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerStarted","Data":"a44a8ff33136a79d160b7594ff4f4cc994f66dd03004902c8c1353bd8c3ef53c"} Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.286674 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.288517 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" event={"ID":"e6a350a1-b153-4edb-b937-ff7ccec8d1de","Type":"ContainerDied","Data":"9e14d3ee166aded1d7a8910ebecdb1eccbc4c5aab0200432ebde4cfc1c5a5473"} Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.288567 4739 scope.go:117] "RemoveContainer" containerID="1779a8f6e311441460ae687923fa5a4909e3214be09805f17629a2dc2d3a75ca" Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.532012 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"] Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.554732 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"] Feb 18 14:20:45 crc kubenswrapper[4739]: I0218 14:20:45.368864 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerStarted","Data":"c5957e0cde43838579939aa30bcc7ed4defe06badb42b7084617cf8db85e67b4"} Feb 18 14:20:45 crc kubenswrapper[4739]: I0218 14:20:45.414744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" 
event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerStarted","Data":"0fa401e0fef3f9cb42562b511b0eebc5a44973f242c043cd8c922196427d9cb3"} Feb 18 14:20:45 crc kubenswrapper[4739]: I0218 14:20:45.415536 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:45 crc kubenswrapper[4739]: I0218 14:20:45.444604 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" podStartSLOduration=6.444584571 podStartE2EDuration="6.444584571s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:45.442895127 +0000 UTC m=+1277.938616049" watchObservedRunningTime="2026-02-18 14:20:45.444584571 +0000 UTC m=+1277.940305513" Feb 18 14:20:46 crc kubenswrapper[4739]: I0218 14:20:46.426233 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a350a1-b153-4edb-b937-ff7ccec8d1de" path="/var/lib/kubelet/pods/e6a350a1-b153-4edb-b937-ff7ccec8d1de/volumes" Feb 18 14:20:46 crc kubenswrapper[4739]: I0218 14:20:46.428937 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerStarted","Data":"7849d496b346d76e556cffbb4d826b3d41a907f7ef452783e6466378fd4c5234"} Feb 18 14:20:46 crc kubenswrapper[4739]: I0218 14:20:46.429115 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-log" containerID="cri-o://a44a8ff33136a79d160b7594ff4f4cc994f66dd03004902c8c1353bd8c3ef53c" gracePeriod=30 Feb 18 14:20:46 crc kubenswrapper[4739]: I0218 14:20:46.429224 4739 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-httpd" containerID="cri-o://7849d496b346d76e556cffbb4d826b3d41a907f7ef452783e6466378fd4c5234" gracePeriod=30 Feb 18 14:20:46 crc kubenswrapper[4739]: I0218 14:20:46.467510 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.467491901 podStartE2EDuration="7.467491901s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:46.453690827 +0000 UTC m=+1278.949411839" watchObservedRunningTime="2026-02-18 14:20:46.467491901 +0000 UTC m=+1278.963212823" Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.451520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerStarted","Data":"55ab75468df7ce6273a9b4a49377e4389940f83c3a676618a01a66897198c554"} Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.453605 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-httpd" containerID="cri-o://55ab75468df7ce6273a9b4a49377e4389940f83c3a676618a01a66897198c554" gracePeriod=30 Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.453611 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-log" containerID="cri-o://c5957e0cde43838579939aa30bcc7ed4defe06badb42b7084617cf8db85e67b4" gracePeriod=30 Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.455168 4739 generic.go:334] "Generic (PLEG): container finished" podID="dee39188-8dd1-45dd-afd8-ef4599d03adb" 
containerID="7849d496b346d76e556cffbb4d826b3d41a907f7ef452783e6466378fd4c5234" exitCode=143 Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.455196 4739 generic.go:334] "Generic (PLEG): container finished" podID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerID="a44a8ff33136a79d160b7594ff4f4cc994f66dd03004902c8c1353bd8c3ef53c" exitCode=143 Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.455212 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerDied","Data":"7849d496b346d76e556cffbb4d826b3d41a907f7ef452783e6466378fd4c5234"} Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.455231 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerDied","Data":"a44a8ff33136a79d160b7594ff4f4cc994f66dd03004902c8c1353bd8c3ef53c"} Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.488422 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.488403842 podStartE2EDuration="8.488403842s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:47.474786342 +0000 UTC m=+1279.970507264" watchObservedRunningTime="2026-02-18 14:20:47.488403842 +0000 UTC m=+1279.984124764" Feb 18 14:20:48 crc kubenswrapper[4739]: I0218 14:20:48.470817 4739 generic.go:334] "Generic (PLEG): container finished" podID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerID="55ab75468df7ce6273a9b4a49377e4389940f83c3a676618a01a66897198c554" exitCode=0 Feb 18 14:20:48 crc kubenswrapper[4739]: I0218 14:20:48.471105 4739 generic.go:334] "Generic (PLEG): container finished" podID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" 
containerID="c5957e0cde43838579939aa30bcc7ed4defe06badb42b7084617cf8db85e67b4" exitCode=143 Feb 18 14:20:48 crc kubenswrapper[4739]: I0218 14:20:48.470917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerDied","Data":"55ab75468df7ce6273a9b4a49377e4389940f83c3a676618a01a66897198c554"} Feb 18 14:20:48 crc kubenswrapper[4739]: I0218 14:20:48.471143 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerDied","Data":"c5957e0cde43838579939aa30bcc7ed4defe06badb42b7084617cf8db85e67b4"} Feb 18 14:20:50 crc kubenswrapper[4739]: I0218 14:20:50.310627 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:50 crc kubenswrapper[4739]: I0218 14:20:50.371777 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:20:50 crc kubenswrapper[4739]: I0218 14:20:50.372030 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" containerID="cri-o://2ba789c14a907f042da88ae951cbe7458905348d9982d8330fe417e5b45cd9fc" gracePeriod=10 Feb 18 14:20:50 crc kubenswrapper[4739]: I0218 14:20:50.499478 4739 generic.go:334] "Generic (PLEG): container finished" podID="0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" containerID="0a9c96ef9bc05a189057147729fcd0a7c0a62f199e816b285da0bdde192dbc40" exitCode=0 Feb 18 14:20:50 crc kubenswrapper[4739]: I0218 14:20:50.499716 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffpk" event={"ID":"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7","Type":"ContainerDied","Data":"0a9c96ef9bc05a189057147729fcd0a7c0a62f199e816b285da0bdde192dbc40"} Feb 18 
14:20:51 crc kubenswrapper[4739]: I0218 14:20:51.527705 4739 generic.go:334] "Generic (PLEG): container finished" podID="a95a3e0d-f263-464b-9406-0fc51724a068" containerID="2ba789c14a907f042da88ae951cbe7458905348d9982d8330fe417e5b45cd9fc" exitCode=0 Feb 18 14:20:51 crc kubenswrapper[4739]: I0218 14:20:51.527769 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" event={"ID":"a95a3e0d-f263-464b-9406-0fc51724a068","Type":"ContainerDied","Data":"2ba789c14a907f042da88ae951cbe7458905348d9982d8330fe417e5b45cd9fc"} Feb 18 14:20:52 crc kubenswrapper[4739]: I0218 14:20:52.317772 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused" Feb 18 14:20:57 crc kubenswrapper[4739]: I0218 14:20:57.318291 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.828736 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.947836 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.947994 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.948037 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.948065 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h99sv\" (UniqueName: \"kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.948260 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.948295 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.955243 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv" (OuterVolumeSpecName: "kube-api-access-h99sv") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "kube-api-access-h99sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.963882 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.964041 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.964079 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts" (OuterVolumeSpecName: "scripts") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.983493 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data" (OuterVolumeSpecName: "config-data") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.987987 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051229 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h99sv\" (UniqueName: \"kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051258 4739 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051269 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051279 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051286 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051304 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.372806 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.372877 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.373044 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.374061 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.374132 4739 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a" gracePeriod=600 Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.615278 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffpk" event={"ID":"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7","Type":"ContainerDied","Data":"70d11242c01619e7bdfd32d0a6252d06f3b61a6d441fcbc7ab28b9bd66c4286b"} Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.615316 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.615323 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70d11242c01619e7bdfd32d0a6252d06f3b61a6d441fcbc7ab28b9bd66c4286b" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.618514 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a" exitCode=0 Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.618554 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a"} Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.618590 4739 scope.go:117] "RemoveContainer" containerID="a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.021266 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pffpk"] Feb 18 14:21:00 crc 
kubenswrapper[4739]: I0218 14:21:00.033047 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pffpk"] Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.112513 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-42sfc"] Feb 18 14:21:00 crc kubenswrapper[4739]: E0218 14:21:00.113079 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a350a1-b153-4edb-b937-ff7ccec8d1de" containerName="init" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.113104 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a350a1-b153-4edb-b937-ff7ccec8d1de" containerName="init" Feb 18 14:21:00 crc kubenswrapper[4739]: E0218 14:21:00.113119 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" containerName="keystone-bootstrap" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.113128 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" containerName="keystone-bootstrap" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.113354 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a350a1-b153-4edb-b937-ff7ccec8d1de" containerName="init" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.113393 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" containerName="keystone-bootstrap" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.114343 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.121262 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.121485 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.121603 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.122412 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.130618 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fzf8" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.136070 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-42sfc"] Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291584 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55f82\" (UniqueName: \"kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291783 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291830 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291851 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291921 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.393869 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.393945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.393980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55f82\" (UniqueName: \"kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.394111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.394168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.394190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.403747 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle\") pod 
\"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.403865 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.404005 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.404199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.404199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.416866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f82\" (UniqueName: \"kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc 
kubenswrapper[4739]: I0218 14:21:00.424786 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" path="/var/lib/kubelet/pods/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7/volumes" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.450428 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.318174 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.320923 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:21:02 crc kubenswrapper[4739]: E0218 14:21:02.719731 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 18 14:21:02 crc kubenswrapper[4739]: E0218 14:21:02.719978 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n688h79h8ch5d6h669h577hf9h5d5h89h666h664h548h589h659h66fh555hddh668h6h6ch5c7h687h5b8h55fhdbh7dh84hdbhc6h68bh5d9h6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngbgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e2a576aa-9125-4096-8ee5-ac83d6aaee01): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.819205 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.949988 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950044 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950161 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8swm\" (UniqueName: \"kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950214 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950250 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950316 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950415 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.952095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs" (OuterVolumeSpecName: "logs") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.952909 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.957968 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts" (OuterVolumeSpecName: "scripts") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.958022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm" (OuterVolumeSpecName: "kube-api-access-l8swm") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "kube-api-access-l8swm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.997984 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15" (OuterVolumeSpecName: "glance") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "pvc-15694efd-23b4-48d1-830b-42bbc6c51b15". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.006633 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.055064 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068514 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068552 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068570 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068617 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") on node \"crc\" " Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068635 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068648 4739 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-l8swm\" (UniqueName: \"kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068667 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.076594 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data" (OuterVolumeSpecName: "config-data") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.100640 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.100794 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-15694efd-23b4-48d1-830b-42bbc6c51b15" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15") on node "crc" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.170221 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.170257 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.659428 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerDied","Data":"3de40032c9cfb4df3fb82bbfc31efd6983d0c4857cda5c9f3d8ac5118ab12bd7"} Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.659535 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.718997 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.741302 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.772552 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:21:03 crc kubenswrapper[4739]: E0218 14:21:03.773116 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-httpd" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.773139 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-httpd" Feb 18 14:21:03 crc kubenswrapper[4739]: E0218 14:21:03.773177 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-log" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.773187 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-log" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.773691 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-log" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.773718 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-httpd" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.775573 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.777609 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.780127 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.784681 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896738 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896761 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896916 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.897073 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92f7z\" (UniqueName: \"kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.897415 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999464 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999536 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999654 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999672 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999706 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999755 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92f7z\" (UniqueName: \"kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.001723 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.001911 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.006678 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.006685 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.008345 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.008381 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0bd6abac90ebac69ac03837941e4aa1820f14a49ea1b1fe31e1dd216b0487447/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.012591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.018263 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92f7z\" (UniqueName: 
\"kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.025198 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.058045 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.108802 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.424638 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" path="/var/lib/kubelet/pods/dee39188-8dd1-45dd-afd8-ef4599d03adb/volumes" Feb 18 14:21:10 crc kubenswrapper[4739]: I0218 14:21:10.985844 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:21:10 crc kubenswrapper[4739]: I0218 14:21:10.986346 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.050250 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.050752 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6wgcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-2dhxm_openstack(3edd4390-e376-469a-b7c5-9bd7bf9dd100): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 
18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.052041 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-2dhxm" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.564882 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.565238 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7wlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*
42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-h5s86_openstack(a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.567366 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-h5s86" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.675564 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.686399 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.743039 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" event={"ID":"a95a3e0d-f263-464b-9406-0fc51724a068","Type":"ContainerDied","Data":"e8e67403108bde3a436c81c4b7ef9a41f1b4af29116b93e8959bf7b75aa603d8"} Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.743125 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.746061 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.746496 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerDied","Data":"d7195d297c9d5141a71387652075a97edc794fb733f7afeadd4dd323957a1f63"} Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.748433 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-2dhxm" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.748474 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-h5s86" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790093 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790191 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790230 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790425 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790525 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790672 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790736 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 
14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790783 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790809 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zgw6\" (UniqueName: \"kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790844 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkcvc\" (UniqueName: \"kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790885 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790916 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790938 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.791022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.791059 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs" (OuterVolumeSpecName: "logs") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.791719 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.791749 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.798826 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts" (OuterVolumeSpecName: "scripts") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.810871 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc" (OuterVolumeSpecName: "kube-api-access-bkcvc") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "kube-api-access-bkcvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.814381 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6" (OuterVolumeSpecName: "kube-api-access-9zgw6") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "kube-api-access-9zgw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.815845 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b" (OuterVolumeSpecName: "glance") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.851132 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.857915 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.861159 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.867193 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.885208 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data" (OuterVolumeSpecName: "config-data") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.886672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894010 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894045 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894060 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894088 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") on node \"crc\" " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894101 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894113 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894125 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zgw6\" (UniqueName: \"kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894138 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkcvc\" (UniqueName: \"kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894150 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894161 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.903114 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.909634 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config" (OuterVolumeSpecName: "config") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.944095 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.944259 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b") on node "crc" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.995957 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.995993 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.996008 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.112554 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.121935 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.142631 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.174043 4739 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.185757 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.186261 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-httpd" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186281 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-httpd" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.186315 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="init" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186322 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="init" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.186330 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186336 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.186354 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-log" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186360 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-log" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186558 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" Feb 18 14:21:12 
crc kubenswrapper[4739]: I0218 14:21:12.186581 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-log" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186598 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-httpd" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.191371 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.193768 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.193869 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.197873 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304344 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fd27\" (UniqueName: \"kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304529 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304580 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304676 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304730 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304752 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc 
kubenswrapper[4739]: I0218 14:21:12.304882 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.318474 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: i/o timeout" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.406804 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.406859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.406953 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407004 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407025 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407075 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407299 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407397 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fd27\" (UniqueName: \"kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407586 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.408133 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.409745 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.409791 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f742b1b3d6273dd3375e0e5a76a4c01f047ef0c4f7f8765a09ef674c2c3b6349/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.411006 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.412815 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.413154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.424556 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.430345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fd27\" (UniqueName: \"kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.451954 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" path="/var/lib/kubelet/pods/a95a3e0d-f263-464b-9406-0fc51724a068/volumes" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.452953 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" path="/var/lib/kubelet/pods/e9d24b5d-3b30-41c2-b736-7a98e88e1da4/volumes" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.462060 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.516583 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.958589 4739 scope.go:117] "RemoveContainer" containerID="7849d496b346d76e556cffbb4d826b3d41a907f7ef452783e6466378fd4c5234" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.961653 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.961800 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vh97j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-hm27f_openstack(51d77527-a940-4423-ac63-4a7cdf366510): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.963119 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-hm27f" podUID="51d77527-a940-4423-ac63-4a7cdf366510" Feb 18 14:21:13 crc kubenswrapper[4739]: I0218 14:21:13.406485 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-42sfc"] Feb 18 14:21:13 crc kubenswrapper[4739]: I0218 14:21:13.588597 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:21:13 crc kubenswrapper[4739]: W0218 14:21:13.617795 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c42d996_bf46_4e69_892f_c720a9bce282.slice/crio-13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f WatchSource:0}: Error finding container 13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f: Status 404 returned error can't find the container with id 13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f Feb 18 14:21:13 crc kubenswrapper[4739]: W0218 14:21:13.624920 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3677acc3_fd05_4d33_ac6c_aa420ecce125.slice/crio-3a259073ef5437a741c7e7a8473f57ccd05a34b5954be95c2003c50962d48fb6 WatchSource:0}: Error finding container 
3a259073ef5437a741c7e7a8473f57ccd05a34b5954be95c2003c50962d48fb6: Status 404 returned error can't find the container with id 3a259073ef5437a741c7e7a8473f57ccd05a34b5954be95c2003c50962d48fb6 Feb 18 14:21:13 crc kubenswrapper[4739]: I0218 14:21:13.641235 4739 scope.go:117] "RemoveContainer" containerID="a44a8ff33136a79d160b7594ff4f4cc994f66dd03004902c8c1353bd8c3ef53c" Feb 18 14:21:13 crc kubenswrapper[4739]: I0218 14:21:13.783147 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42sfc" event={"ID":"0c42d996-bf46-4e69-892f-c720a9bce282","Type":"ContainerStarted","Data":"13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f"} Feb 18 14:21:13 crc kubenswrapper[4739]: I0218 14:21:13.796784 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerStarted","Data":"3a259073ef5437a741c7e7a8473f57ccd05a34b5954be95c2003c50962d48fb6"} Feb 18 14:21:13 crc kubenswrapper[4739]: E0218 14:21:13.803044 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-hm27f" podUID="51d77527-a940-4423-ac63-4a7cdf366510" Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.073357 4739 scope.go:117] "RemoveContainer" containerID="2ba789c14a907f042da88ae951cbe7458905348d9982d8330fe417e5b45cd9fc" Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.109147 4739 scope.go:117] "RemoveContainer" containerID="521ee440b42cc6ac855fe6f696353905b77bad514b6fa532070f2cedd7a11e27" Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.174761 4739 scope.go:117] "RemoveContainer" containerID="55ab75468df7ce6273a9b4a49377e4389940f83c3a676618a01a66897198c554" Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.223723 4739 
scope.go:117] "RemoveContainer" containerID="c5957e0cde43838579939aa30bcc7ed4defe06badb42b7084617cf8db85e67b4" Feb 18 14:21:14 crc kubenswrapper[4739]: W0218 14:21:14.230094 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5b6ca41_d34e_4ef9_b04c_4de7a50b71ad.slice/crio-a83503aad1227f8256e1acb3ea10be6b3f0c314a395eb1f234c642acb0b7ab14 WatchSource:0}: Error finding container a83503aad1227f8256e1acb3ea10be6b3f0c314a395eb1f234c642acb0b7ab14: Status 404 returned error can't find the container with id a83503aad1227f8256e1acb3ea10be6b3f0c314a395eb1f234c642acb0b7ab14 Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.232946 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.814061 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42sfc" event={"ID":"0c42d996-bf46-4e69-892f-c720a9bce282","Type":"ContainerStarted","Data":"331132c24f3ac7a502d7f3f575324d2550d00d5e32f94df80daa161182a3e385"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.819928 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.825873 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerStarted","Data":"a83503aad1227f8256e1acb3ea10be6b3f0c314a395eb1f234c642acb0b7ab14"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.837217 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerStarted","Data":"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.843899 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q58nf" event={"ID":"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc","Type":"ContainerStarted","Data":"d755d74166c084972a673dd411c3ae3925155e88943bb67d4481d42cff283489"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.845241 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-42sfc" podStartSLOduration=14.845219109 podStartE2EDuration="14.845219109s" podCreationTimestamp="2026-02-18 14:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:14.837581835 +0000 UTC m=+1307.333302757" watchObservedRunningTime="2026-02-18 14:21:14.845219109 +0000 UTC m=+1307.340940041" Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.847765 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerStarted","Data":"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.880395 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-q58nf" podStartSLOduration=5.930040861 podStartE2EDuration="35.880376155s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="2026-02-18 14:20:41.600630699 +0000 UTC m=+1274.096351631" lastFinishedPulling="2026-02-18 14:21:11.550966003 +0000 UTC m=+1304.046686925" observedRunningTime="2026-02-18 14:21:14.87585355 +0000 UTC m=+1307.371574472" watchObservedRunningTime="2026-02-18 14:21:14.880376155 +0000 UTC m=+1307.376097077" Feb 18 14:21:15 crc 
kubenswrapper[4739]: I0218 14:21:15.889903 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerStarted","Data":"c780b2636e91712d69d355da22c8be023ac8a48eb8e209ca36fa75cd60964d96"} Feb 18 14:21:15 crc kubenswrapper[4739]: I0218 14:21:15.890539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerStarted","Data":"2ac1313ffdbad15c09d0bb7f2a4d1b596f72ac62a6780cb62e70fa5559b8c999"} Feb 18 14:21:15 crc kubenswrapper[4739]: I0218 14:21:15.894871 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerStarted","Data":"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59"} Feb 18 14:21:15 crc kubenswrapper[4739]: I0218 14:21:15.940780 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.940750163 podStartE2EDuration="3.940750163s" podCreationTimestamp="2026-02-18 14:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:15.917914652 +0000 UTC m=+1308.413635614" watchObservedRunningTime="2026-02-18 14:21:15.940750163 +0000 UTC m=+1308.436471095" Feb 18 14:21:15 crc kubenswrapper[4739]: I0218 14:21:15.982934 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.982864296 podStartE2EDuration="12.982864296s" podCreationTimestamp="2026-02-18 14:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:15.957714906 +0000 UTC m=+1308.453435848" 
watchObservedRunningTime="2026-02-18 14:21:15.982864296 +0000 UTC m=+1308.478585228" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.517162 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.517644 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.602679 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.603344 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.981400 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.982229 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:21:24 crc kubenswrapper[4739]: I0218 14:21:24.109312 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:24 crc kubenswrapper[4739]: I0218 14:21:24.109411 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:24 crc kubenswrapper[4739]: I0218 14:21:24.150323 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:24 crc kubenswrapper[4739]: I0218 14:21:24.162910 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:25 crc kubenswrapper[4739]: I0218 14:21:25.003067 4739 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:21:25 crc kubenswrapper[4739]: I0218 14:21:25.003420 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:25 crc kubenswrapper[4739]: I0218 14:21:25.003467 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:25 crc kubenswrapper[4739]: I0218 14:21:25.003430 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.033095 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerStarted","Data":"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3"} Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.035031 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5s86" event={"ID":"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8","Type":"ContainerStarted","Data":"d0d344e509459df1445da7eae6edf0b5c1a43772e911ac197e49dc6ffc6fe7a4"} Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.037856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2dhxm" event={"ID":"3edd4390-e376-469a-b7c5-9bd7bf9dd100","Type":"ContainerStarted","Data":"cb1eddfed9e44b497a97463dd1b3569fad968271c4c4d74bfb3de94948277b04"} Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.046599 4739 generic.go:334] "Generic (PLEG): container finished" podID="0c42d996-bf46-4e69-892f-c720a9bce282" containerID="331132c24f3ac7a502d7f3f575324d2550d00d5e32f94df80daa161182a3e385" exitCode=0 Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.046712 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42sfc" 
event={"ID":"0c42d996-bf46-4e69-892f-c720a9bce282","Type":"ContainerDied","Data":"331132c24f3ac7a502d7f3f575324d2550d00d5e32f94df80daa161182a3e385"} Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.061316 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h5s86" podStartSLOduration=3.983007367 podStartE2EDuration="49.061294947s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="2026-02-18 14:20:42.095283178 +0000 UTC m=+1274.591004100" lastFinishedPulling="2026-02-18 14:21:27.173570748 +0000 UTC m=+1319.669291680" observedRunningTime="2026-02-18 14:21:28.048348007 +0000 UTC m=+1320.544068939" watchObservedRunningTime="2026-02-18 14:21:28.061294947 +0000 UTC m=+1320.557015869" Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.097001 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-2dhxm" podStartSLOduration=3.671636629 podStartE2EDuration="49.096979246s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="2026-02-18 14:20:41.43577806 +0000 UTC m=+1273.931498992" lastFinishedPulling="2026-02-18 14:21:26.861120687 +0000 UTC m=+1319.356841609" observedRunningTime="2026-02-18 14:21:28.082923208 +0000 UTC m=+1320.578644130" watchObservedRunningTime="2026-02-18 14:21:28.096979246 +0000 UTC m=+1320.592700188" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.069140 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hm27f" event={"ID":"51d77527-a940-4423-ac63-4a7cdf366510","Type":"ContainerStarted","Data":"13f81a775889f6ea108dde89cc1b11f4232f55a79b2165f0775cd5d113f547b2"} Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.108607 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-hm27f" podStartSLOduration=4.286423138 podStartE2EDuration="50.1085633s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" 
firstStartedPulling="2026-02-18 14:20:41.349048685 +0000 UTC m=+1273.844769607" lastFinishedPulling="2026-02-18 14:21:27.171188847 +0000 UTC m=+1319.666909769" observedRunningTime="2026-02-18 14:21:29.093103436 +0000 UTC m=+1321.588824368" watchObservedRunningTime="2026-02-18 14:21:29.1085633 +0000 UTC m=+1321.604284222" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.617378 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.779957 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.780008 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.780169 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.780301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55f82\" (UniqueName: \"kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.780338 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.780476 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.786786 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts" (OuterVolumeSpecName: "scripts") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.788608 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.788748 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82" (OuterVolumeSpecName: "kube-api-access-55f82") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "kube-api-access-55f82". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.790647 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.830144 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data" (OuterVolumeSpecName: "config-data") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.884293 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.884330 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55f82\" (UniqueName: \"kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.884343 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.884356 4739 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.884367 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.890133 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.986524 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.085236 4739 generic.go:334] "Generic (PLEG): container finished" podID="f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" containerID="d755d74166c084972a673dd411c3ae3925155e88943bb67d4481d42cff283489" exitCode=0
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.085316 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q58nf" event={"ID":"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc","Type":"ContainerDied","Data":"d755d74166c084972a673dd411c3ae3925155e88943bb67d4481d42cff283489"}
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.089900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42sfc" event={"ID":"0c42d996-bf46-4e69-892f-c720a9bce282","Type":"ContainerDied","Data":"13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f"}
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.089940 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.090000 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-42sfc"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.213344 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7dff988c46-72t9g"]
Feb 18 14:21:30 crc kubenswrapper[4739]: E0218 14:21:30.213877 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c42d996-bf46-4e69-892f-c720a9bce282" containerName="keystone-bootstrap"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.213902 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c42d996-bf46-4e69-892f-c720a9bce282" containerName="keystone-bootstrap"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.214176 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c42d996-bf46-4e69-892f-c720a9bce282" containerName="keystone-bootstrap"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.215260 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.217950 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.218369 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fzf8"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.218683 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.218812 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.218983 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.228695 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.241135 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7dff988c46-72t9g"]
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.393953 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz6x9\" (UniqueName: \"kubernetes.io/projected/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-kube-api-access-sz6x9\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394019 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-fernet-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394053 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-internal-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394356 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-config-data\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-scripts\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394718 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-credential-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394750 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-combined-ca-bundle\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-public-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.473141 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.473644 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.478368 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.478522 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.480652 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498147 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-combined-ca-bundle\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498233 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-public-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6x9\" (UniqueName: \"kubernetes.io/projected/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-kube-api-access-sz6x9\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498290 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-fernet-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498318 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-internal-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-config-data\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498530 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-scripts\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-credential-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.502547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.508321 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-scripts\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.508461 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-public-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.508520 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-fernet-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.521590 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-credential-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.522667 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-combined-ca-bundle\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.523432 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-internal-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.525620 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-config-data\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.538301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz6x9\" (UniqueName: \"kubernetes.io/projected/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-kube-api-access-sz6x9\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.542864 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.072862 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7dff988c46-72t9g"]
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.108387 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7dff988c46-72t9g" event={"ID":"74cf9632-a7c0-4b6e-98ce-ebd6411a6594","Type":"ContainerStarted","Data":"b98cb9aafff0356094b6f04f8e15d578115ab86d26d1c69d5d1753220bf423e1"}
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.450091 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q58nf"
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.533235 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs\") pod \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") "
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.533301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data\") pod \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") "
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.533374 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts\") pod \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") "
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.533637 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d67kg\" (UniqueName: \"kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg\") pod \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") "
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.533696 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle\") pod \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") "
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.536523 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs" (OuterVolumeSpecName: "logs") pod "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" (UID: "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.542006 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts" (OuterVolumeSpecName: "scripts") pod "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" (UID: "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.547430 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg" (OuterVolumeSpecName: "kube-api-access-d67kg") pod "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" (UID: "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc"). InnerVolumeSpecName "kube-api-access-d67kg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.584130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data" (OuterVolumeSpecName: "config-data") pod "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" (UID: "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.598841 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" (UID: "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.636364 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.636411 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.636423 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.636434 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d67kg\" (UniqueName: \"kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.636466 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.126847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7dff988c46-72t9g" event={"ID":"74cf9632-a7c0-4b6e-98ce-ebd6411a6594","Type":"ContainerStarted","Data":"e5c1f3bf17d3400a13171c975f2d5f673fb911dbbe512cb159d24351431b4c93"}
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.126920 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.134665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q58nf" event={"ID":"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc","Type":"ContainerDied","Data":"1870c4359d29029459a4d3730dceade0333f6df6959a787f14729f3d6e56a8fd"}
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.134719 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1870c4359d29029459a4d3730dceade0333f6df6959a787f14729f3d6e56a8fd"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.134724 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q58nf"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.174155 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7dff988c46-72t9g" podStartSLOduration=2.174137129 podStartE2EDuration="2.174137129s" podCreationTimestamp="2026-02-18 14:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:32.148066275 +0000 UTC m=+1324.643787197" watchObservedRunningTime="2026-02-18 14:21:32.174137129 +0000 UTC m=+1324.669858051"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.370400 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-65fbfb5b48-rchlc"]
Feb 18 14:21:32 crc kubenswrapper[4739]: E0218 14:21:32.371123 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" containerName="placement-db-sync"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.371150 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" containerName="placement-db-sync"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.371480 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" containerName="placement-db-sync"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.372944 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.375302 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.375542 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-f4jrj"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.375546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.375688 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.375741 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.403467 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65fbfb5b48-rchlc"]
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.459571 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38710bdf-e679-45f4-b3a6-597a3b1cb186-logs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460028 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-config-data\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460245 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-internal-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460281 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn54q\" (UniqueName: \"kubernetes.io/projected/38710bdf-e679-45f4-b3a6-597a3b1cb186-kube-api-access-nn54q\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-combined-ca-bundle\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460398 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-scripts\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460463 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-public-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-internal-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562697 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn54q\" (UniqueName: \"kubernetes.io/projected/38710bdf-e679-45f4-b3a6-597a3b1cb186-kube-api-access-nn54q\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562735 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-combined-ca-bundle\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562837 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-scripts\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562904 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-public-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562983 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38710bdf-e679-45f4-b3a6-597a3b1cb186-logs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.563057 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-config-data\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.563723 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38710bdf-e679-45f4-b3a6-597a3b1cb186-logs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.576017 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-internal-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.578180 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-combined-ca-bundle\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.580668 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn54q\" (UniqueName: \"kubernetes.io/projected/38710bdf-e679-45f4-b3a6-597a3b1cb186-kube-api-access-nn54q\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.831376 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-public-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.837564 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-scripts\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.837978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-config-data\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:33 crc kubenswrapper[4739]: I0218 14:21:33.005871 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:37 crc kubenswrapper[4739]: I0218 14:21:37.213560 4739 generic.go:334] "Generic (PLEG): container finished" podID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" containerID="d0d344e509459df1445da7eae6edf0b5c1a43772e911ac197e49dc6ffc6fe7a4" exitCode=0
Feb 18 14:21:37 crc kubenswrapper[4739]: I0218 14:21:37.213694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5s86" event={"ID":"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8","Type":"ContainerDied","Data":"d0d344e509459df1445da7eae6edf0b5c1a43772e911ac197e49dc6ffc6fe7a4"}
Feb 18 14:21:37 crc kubenswrapper[4739]: I0218 14:21:37.417277 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65fbfb5b48-rchlc"]
Feb 18 14:21:37 crc kubenswrapper[4739]: E0218 14:21:37.428954 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01"
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.226531 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerStarted","Data":"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82"}
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.226727 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="ceilometer-notification-agent" containerID="cri-o://c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" gracePeriod=30
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.227048 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.227390 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="proxy-httpd" containerID="cri-o://77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" gracePeriod=30
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.227454 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="sg-core" containerID="cri-o://709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" gracePeriod=30
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.230191 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65fbfb5b48-rchlc" event={"ID":"38710bdf-e679-45f4-b3a6-597a3b1cb186","Type":"ContainerStarted","Data":"5b6162e9273de8f9cb959ff7ffa10674372c041cc24173e5449c3947335f5a9f"}
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.230219 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65fbfb5b48-rchlc" event={"ID":"38710bdf-e679-45f4-b3a6-597a3b1cb186","Type":"ContainerStarted","Data":"e138c9fb56a1b2659323325dd24bef442707b7c5c27da58fb1ff15c79ac1c701"}
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.230237 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65fbfb5b48-rchlc" event={"ID":"38710bdf-e679-45f4-b3a6-597a3b1cb186","Type":"ContainerStarted","Data":"870b5536b541d29a1685d8df33006e3196991a5a62e728ad1a4c16a4398901aa"}
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.230296 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.230498 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.284908 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-65fbfb5b48-rchlc" podStartSLOduration=6.28489074 podStartE2EDuration="6.28489074s" podCreationTimestamp="2026-02-18 14:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:38.269389845 +0000 UTC m=+1330.765110777" watchObservedRunningTime="2026-02-18 14:21:38.28489074 +0000 UTC m=+1330.780611662"
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.646222 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5s86"
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.714123 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data\") pod \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") "
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.714506 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7wlp\" (UniqueName: \"kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp\") pod \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") "
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.714610 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle\") pod \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") "
Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.718992 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" (UID: "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.719020 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp" (OuterVolumeSpecName: "kube-api-access-s7wlp") pod "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" (UID: "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8"). InnerVolumeSpecName "kube-api-access-s7wlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.747606 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" (UID: "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.817136 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.817163 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7wlp\" (UniqueName: \"kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.817174 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.101137 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.124961 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngbgv\" (UniqueName: \"kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.125345 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.133766 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts" (OuterVolumeSpecName: "scripts") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.133911 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv" (OuterVolumeSpecName: "kube-api-access-ngbgv") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "kube-api-access-ngbgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227061 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227180 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227312 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227927 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngbgv\" (UniqueName: 
\"kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227945 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.228870 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.229138 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242061 4739 generic.go:334] "Generic (PLEG): container finished" podID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" exitCode=0 Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242128 4739 generic.go:334] "Generic (PLEG): container finished" podID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" exitCode=2 Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242145 4739 generic.go:334] "Generic (PLEG): container finished" podID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" exitCode=0 Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242216 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerDied","Data":"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82"} Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242284 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerDied","Data":"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3"} Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242299 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerDied","Data":"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff"} Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242312 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerDied","Data":"012dc8f477dfe3bd25f7fe5decf6c00cb3c850250a18972e074f41544b597e70"} Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242334 4739 scope.go:117] "RemoveContainer" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242511 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.256465 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5s86" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.257748 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5s86" event={"ID":"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8","Type":"ContainerDied","Data":"d2307342ad946d88b327f9c4998f5fef25fdf0715d6dc8137505b684ccb0bf1f"} Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.260493 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2307342ad946d88b327f9c4998f5fef25fdf0715d6dc8137505b684ccb0bf1f" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.270068 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.311705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.324399 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data" (OuterVolumeSpecName: "config-data") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.330546 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.330585 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.330598 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.330611 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 
14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.330622 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.334660 4739 scope.go:117] "RemoveContainer" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.357726 4739 scope.go:117] "RemoveContainer" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.455815 4739 scope.go:117] "RemoveContainer" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.457227 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": container with ID starting with 77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82 not found: ID does not exist" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.457277 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82"} err="failed to get container status \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": rpc error: code = NotFound desc = could not find container \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": container with ID starting with 77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82 not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.457302 4739 scope.go:117] "RemoveContainer" 
containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.457782 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": container with ID starting with 709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3 not found: ID does not exist" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.457816 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3"} err="failed to get container status \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": rpc error: code = NotFound desc = could not find container \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": container with ID starting with 709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3 not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.457836 4739 scope.go:117] "RemoveContainer" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.458032 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": container with ID starting with c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff not found: ID does not exist" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458056 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff"} err="failed to get container status \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": rpc error: code = NotFound desc = could not find container \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": container with ID starting with c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458069 4739 scope.go:117] "RemoveContainer" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458220 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82"} err="failed to get container status \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": rpc error: code = NotFound desc = could not find container \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": container with ID starting with 77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82 not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458236 4739 scope.go:117] "RemoveContainer" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458515 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3"} err="failed to get container status \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": rpc error: code = NotFound desc = could not find container \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": container with ID starting with 709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3 not found: ID does not 
exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458543 4739 scope.go:117] "RemoveContainer" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.459063 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff"} err="failed to get container status \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": rpc error: code = NotFound desc = could not find container \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": container with ID starting with c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.459085 4739 scope.go:117] "RemoveContainer" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.462475 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82"} err="failed to get container status \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": rpc error: code = NotFound desc = could not find container \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": container with ID starting with 77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82 not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.462510 4739 scope.go:117] "RemoveContainer" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.462880 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3"} err="failed to get container status 
\"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": rpc error: code = NotFound desc = could not find container \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": container with ID starting with 709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3 not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.462901 4739 scope.go:117] "RemoveContainer" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.463089 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff"} err="failed to get container status \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": rpc error: code = NotFound desc = could not find container \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": container with ID starting with c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.517542 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-765d88ff9c-smd7n"] Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.518135 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="sg-core" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518156 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="sg-core" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.518191 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" containerName="barbican-db-sync" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518199 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" containerName="barbican-db-sync" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.518217 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="proxy-httpd" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518225 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="proxy-httpd" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.518251 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="ceilometer-notification-agent" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518259 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="ceilometer-notification-agent" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518467 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="ceilometer-notification-agent" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518503 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="proxy-httpd" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518516 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" containerName="barbican-db-sync" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518527 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="sg-core" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.519666 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.523463 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.523813 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.523935 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xnq4d" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.532002 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-575dbd86bd-gjcs6"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.533764 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.536680 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.618539 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-575dbd86bd-gjcs6"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.646404 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data-custom\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.646517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data-custom\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.646798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.646938 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njfq9\" (UniqueName: \"kubernetes.io/projected/8f41089a-bbe1-4371-9a89-38423dca256c-kube-api-access-njfq9\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.646977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq6pf\" (UniqueName: \"kubernetes.io/projected/53848a1c-a5c5-4948-a45f-2ba01bc166ca-kube-api-access-pq6pf\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.647031 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-combined-ca-bundle\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc 
kubenswrapper[4739]: I0218 14:21:39.647365 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f41089a-bbe1-4371-9a89-38423dca256c-logs\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.647520 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.647562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53848a1c-a5c5-4948-a45f-2ba01bc166ca-logs\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.647593 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-combined-ca-bundle\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.676239 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-765d88ff9c-smd7n"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.750819 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8f41089a-bbe1-4371-9a89-38423dca256c-logs\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.750925 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.750954 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53848a1c-a5c5-4948-a45f-2ba01bc166ca-logs\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.750976 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-combined-ca-bundle\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751038 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data-custom\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data-custom\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751247 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njfq9\" (UniqueName: \"kubernetes.io/projected/8f41089a-bbe1-4371-9a89-38423dca256c-kube-api-access-njfq9\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq6pf\" (UniqueName: \"kubernetes.io/projected/53848a1c-a5c5-4948-a45f-2ba01bc166ca-kube-api-access-pq6pf\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751301 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-combined-ca-bundle\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751586 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53848a1c-a5c5-4948-a45f-2ba01bc166ca-logs\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751902 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f41089a-bbe1-4371-9a89-38423dca256c-logs\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.757059 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data-custom\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.759591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-combined-ca-bundle\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.768828 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.770515 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-combined-ca-bundle\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: 
\"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.770768 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.771979 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.774013 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data-custom\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.777147 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.795610 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq6pf\" (UniqueName: \"kubernetes.io/projected/53848a1c-a5c5-4948-a45f-2ba01bc166ca-kube-api-access-pq6pf\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.796666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-njfq9\" (UniqueName: \"kubernetes.io/projected/8f41089a-bbe1-4371-9a89-38423dca256c-kube-api-access-njfq9\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.834783 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.852875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.852935 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.852968 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.853080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: 
\"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.853209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb87s\" (UniqueName: \"kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.853239 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.867045 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.896279 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.896843 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.908993 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.917833 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.921079 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.924851 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.924849 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.931642 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.955761 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb87s\" (UniqueName: \"kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.955845 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.956065 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.956132 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: 
\"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.956152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.956317 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.964971 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.967725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.969258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " 
pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.973551 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.974349 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.976566 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.976694 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.979074 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.987376 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.987618 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb87s\" (UniqueName: \"kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.058887 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2l2h\" (UniqueName: \"kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059298 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059344 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059375 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059399 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059417 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059490 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059569 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwwjd\" (UniqueName: \"kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059806 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.098276 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.105433 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:40 crc kubenswrapper[4739]: E0218 14:21:40.106332 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-h2l2h log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="89478c1f-2d02-4e05-ab0b-e257a0dc3d08" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167687 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167757 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: 
\"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167813 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167827 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167885 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167908 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167983 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.168122 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwwjd\" (UniqueName: \"kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.168182 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.168247 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.168375 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2l2h\" (UniqueName: \"kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.170134 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs\") pod 
\"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.170716 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.181869 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.225346 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.225752 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.225944 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.226036 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.226607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.227094 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.228566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.233671 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwwjd\" (UniqueName: \"kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.244365 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2l2h\" (UniqueName: \"kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h\") pod 
\"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.295098 4739 generic.go:334] "Generic (PLEG): container finished" podID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" containerID="cb1eddfed9e44b497a97463dd1b3569fad968271c4c4d74bfb3de94948277b04" exitCode=0 Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.295297 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2dhxm" event={"ID":"3edd4390-e376-469a-b7c5-9bd7bf9dd100","Type":"ContainerDied","Data":"cb1eddfed9e44b497a97463dd1b3569fad968271c4c4d74bfb3de94948277b04"} Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.302634 4739 generic.go:334] "Generic (PLEG): container finished" podID="b3697715-3f94-4086-99ab-65a492bd7542" containerID="615daa9d2c89107b5d8baf69578eb811649ddb2693aedf9b046cefb6786b3af5" exitCode=0 Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.302707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hc8hk" event={"ID":"b3697715-3f94-4086-99ab-65a492bd7542","Type":"ContainerDied","Data":"615daa9d2c89107b5d8baf69578eb811649ddb2693aedf9b046cefb6786b3af5"} Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.308294 4739 generic.go:334] "Generic (PLEG): container finished" podID="51d77527-a940-4423-ac63-4a7cdf366510" containerID="13f81a775889f6ea108dde89cc1b11f4232f55a79b2165f0775cd5d113f547b2" exitCode=0 Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.308408 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hm27f" event={"ID":"51d77527-a940-4423-ac63-4a7cdf366510","Type":"ContainerDied","Data":"13f81a775889f6ea108dde89cc1b11f4232f55a79b2165f0775cd5d113f547b2"} Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.310527 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.361879 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.425464 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" path="/var/lib/kubelet/pods/e2a576aa-9125-4096-8ee5-ac83d6aaee01/volumes" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.476639 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.485622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-575dbd86bd-gjcs6"] Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486536 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486593 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486631 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2l2h\" (UniqueName: \"kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486744 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486795 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486875 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486936 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.487616 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.495426 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.505720 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.509426 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.509489 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.509504 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.511822 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h" (OuterVolumeSpecName: "kube-api-access-h2l2h") pod 
"89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "kube-api-access-h2l2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.513969 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts" (OuterVolumeSpecName: "scripts") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.514079 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.519613 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data" (OuterVolumeSpecName: "config-data") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.611735 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.614488 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.614514 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2l2h\" (UniqueName: \"kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.614532 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.650724 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-765d88ff9c-smd7n"] Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.800629 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"] Feb 18 14:21:40 crc kubenswrapper[4739]: W0218 14:21:40.804302 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e15713_5e2e_4ede_9a0a_231e49dc0deb.slice/crio-ba93d9aa54ac2215f4e253e17d3a5a19152448a81d9e24cc8afa82199ab26e2b WatchSource:0}: Error finding container ba93d9aa54ac2215f4e253e17d3a5a19152448a81d9e24cc8afa82199ab26e2b: Status 404 returned error can't find the container with id ba93d9aa54ac2215f4e253e17d3a5a19152448a81d9e24cc8afa82199ab26e2b Feb 18 
14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.979644 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"] Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.321564 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-765d88ff9c-smd7n" event={"ID":"53848a1c-a5c5-4948-a45f-2ba01bc166ca","Type":"ContainerStarted","Data":"27fbf6ed19e0af0f8d03849dc013e6a9d725589e5f8a24b612618b6cde8be6d0"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.322792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" event={"ID":"8f41089a-bbe1-4371-9a89-38423dca256c","Type":"ContainerStarted","Data":"72a6cc41691def01dbec482f45d1d622290d61beb07ccae66f71bf732054c3a4"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.324334 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerStarted","Data":"7c8f4fc08e3d71e41150f03ab573682f2c49c5142be298c07b7fb3ee868889dd"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.324364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerStarted","Data":"e6e7dfb42369260f31fbf7b2c8b3ddee88d4d1f06f45a187f08b311b7e5a41ef"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.325831 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerID="40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5" exitCode=0 Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.325938 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.326487 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" event={"ID":"d5e15713-5e2e-4ede-9a0a-231e49dc0deb","Type":"ContainerDied","Data":"40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.326567 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" event={"ID":"d5e15713-5e2e-4ede-9a0a-231e49dc0deb","Type":"ContainerStarted","Data":"ba93d9aa54ac2215f4e253e17d3a5a19152448a81d9e24cc8afa82199ab26e2b"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.591497 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.611629 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.623926 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.630108 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.634098 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.634264 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.635263 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761107 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761391 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761530 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xblb8\" (UniqueName: \"kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761572 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " 
pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761624 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761712 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761768 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864125 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864232 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864265 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864353 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xblb8\" (UniqueName: \"kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864481 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864552 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.865034 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc 
kubenswrapper[4739]: I0218 14:21:41.865535 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.874145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.874489 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.876805 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.877343 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.886816 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xblb8\" (UniqueName: \"kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8\") pod \"ceilometer-0\" (UID: 
\"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.961243 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.342991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hm27f" event={"ID":"51d77527-a940-4423-ac63-4a7cdf366510","Type":"ContainerDied","Data":"b800d2e5f20a2d68b8e0f58bfc2fa70fc222830a78f8d8d41068e13af2965ba2"} Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.343289 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b800d2e5f20a2d68b8e0f58bfc2fa70fc222830a78f8d8d41068e13af2965ba2" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.345022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" event={"ID":"d5e15713-5e2e-4ede-9a0a-231e49dc0deb","Type":"ContainerStarted","Data":"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f"} Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.345169 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.348593 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerStarted","Data":"2602390e342c4e0155ec05397045ae37047581af9665cd9582b1ac532f791135"} Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.348696 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.348728 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.352432 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hc8hk" event={"ID":"b3697715-3f94-4086-99ab-65a492bd7542","Type":"ContainerDied","Data":"7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404"} Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.352515 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.369781 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" podStartSLOduration=3.369761921 podStartE2EDuration="3.369761921s" podCreationTimestamp="2026-02-18 14:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:42.363814989 +0000 UTC m=+1334.859535921" watchObservedRunningTime="2026-02-18 14:21:42.369761921 +0000 UTC m=+1334.865482853" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.389306 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b4b66db68-ntx7n" podStartSLOduration=3.389283558 podStartE2EDuration="3.389283558s" podCreationTimestamp="2026-02-18 14:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:42.382874255 +0000 UTC m=+1334.878595177" watchObservedRunningTime="2026-02-18 14:21:42.389283558 +0000 UTC m=+1334.885004480" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.391058 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hm27f" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.402130 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.442296 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89478c1f-2d02-4e05-ab0b-e257a0dc3d08" path="/var/lib/kubelet/pods/89478c1f-2d02-4e05-ab0b-e257a0dc3d08/volumes" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482679 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh97j\" (UniqueName: \"kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482706 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482734 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482812 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") 
pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482900 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.483054 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.483790 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.498389 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.501840 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts" (OuterVolumeSpecName: "scripts") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.508303 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j" (OuterVolumeSpecName: "kube-api-access-vh97j") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "kube-api-access-vh97j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.542744 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.586168 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw7wr\" (UniqueName: \"kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr\") pod \"b3697715-3f94-4086-99ab-65a492bd7542\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.586319 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config\") pod \"b3697715-3f94-4086-99ab-65a492bd7542\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.586343 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle\") pod \"b3697715-3f94-4086-99ab-65a492bd7542\" (UID: 
\"b3697715-3f94-4086-99ab-65a492bd7542\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.587214 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.587241 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh97j\" (UniqueName: \"kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.587257 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.587269 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.588857 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data" (OuterVolumeSpecName: "config-data") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.595007 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr" (OuterVolumeSpecName: "kube-api-access-vw7wr") pod "b3697715-3f94-4086-99ab-65a492bd7542" (UID: "b3697715-3f94-4086-99ab-65a492bd7542"). InnerVolumeSpecName "kube-api-access-vw7wr". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.649073 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config" (OuterVolumeSpecName: "config") pod "b3697715-3f94-4086-99ab-65a492bd7542" (UID: "b3697715-3f94-4086-99ab-65a492bd7542"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.650588 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3697715-3f94-4086-99ab-65a492bd7542" (UID: "b3697715-3f94-4086-99ab-65a492bd7542"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.689297 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw7wr\" (UniqueName: \"kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.689335 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.689349 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.689357 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.343913 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.379269 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5fccfc9568-dvccq"]
Feb 18 14:21:43 crc kubenswrapper[4739]: E0218 14:21:43.380871 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" containerName="heat-db-sync"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.380897 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" containerName="heat-db-sync"
Feb 18 14:21:43 crc kubenswrapper[4739]: E0218 14:21:43.380910 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3697715-3f94-4086-99ab-65a492bd7542" containerName="neutron-db-sync"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.380917 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3697715-3f94-4086-99ab-65a492bd7542" containerName="neutron-db-sync"
Feb 18 14:21:43 crc kubenswrapper[4739]: E0218 14:21:43.380951 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d77527-a940-4423-ac63-4a7cdf366510" containerName="cinder-db-sync"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.380960 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d77527-a940-4423-ac63-4a7cdf366510" containerName="cinder-db-sync"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.381202 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3697715-3f94-4086-99ab-65a492bd7542" containerName="neutron-db-sync"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.381238 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" containerName="heat-db-sync"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.381254 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d77527-a940-4423-ac63-4a7cdf366510" containerName="cinder-db-sync"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.385577 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hm27f"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.387770 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2dhxm"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.387922 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hc8hk"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.390016 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2dhxm" event={"ID":"3edd4390-e376-469a-b7c5-9bd7bf9dd100","Type":"ContainerDied","Data":"ab3a872330660cb89409af9b912cee12aa6ccbf272a46a86fd90d8fd6dc9f4c2"}
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.390076 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab3a872330660cb89409af9b912cee12aa6ccbf272a46a86fd90d8fd6dc9f4c2"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.390170 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.397967 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.398160 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.478898 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fccfc9568-dvccq"]
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.574199 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wgcv\" (UniqueName: \"kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv\") pod \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") "
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.574393 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data\") pod \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") "
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.574602 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle\") pod \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") "
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.576682 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca969df-0549-4d07-ada4-2e0515419a1d-logs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.577204 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-public-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.577259 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-internal-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.580488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-combined-ca-bundle\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.580963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.581043 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9hw6\" (UniqueName: \"kubernetes.io/projected/aca969df-0549-4d07-ada4-2e0515419a1d-kube-api-access-m9hw6\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.581187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data-custom\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.581633 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv" (OuterVolumeSpecName: "kube-api-access-6wgcv") pod "3edd4390-e376-469a-b7c5-9bd7bf9dd100" (UID: "3edd4390-e376-469a-b7c5-9bd7bf9dd100"). InnerVolumeSpecName "kube-api-access-6wgcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.646869 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.667315 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3edd4390-e376-469a-b7c5-9bd7bf9dd100" (UID: "3edd4390-e376-469a-b7c5-9bd7bf9dd100"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.684705 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"]
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.686084 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data-custom\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.686257 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca969df-0549-4d07-ada4-2e0515419a1d-logs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.686404 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-public-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.686496 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-internal-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.686617 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-combined-ca-bundle\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.697935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.698260 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9hw6\" (UniqueName: \"kubernetes.io/projected/aca969df-0549-4d07-ada4-2e0515419a1d-kube-api-access-m9hw6\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.698539 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.698645 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wgcv\" (UniqueName: \"kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.690055 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca969df-0549-4d07-ada4-2e0515419a1d-logs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.699545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data-custom\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.704762 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.719642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-combined-ca-bundle\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.722844 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-internal-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.728600 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-public-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.742229 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9hw6\" (UniqueName: \"kubernetes.io/projected/aca969df-0549-4d07-ada4-2e0515419a1d-kube-api-access-m9hw6\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.749184 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"]
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.757286 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800558 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800738 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800769 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvw7\" (UniqueName: \"kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.819949 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.834040 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.835966 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.844139 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9bgt9"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.844359 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.844521 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.844780 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906121 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906176 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvw7\" (UniqueName: \"kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906235 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906259 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb4bm\" (UniqueName: \"kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906326 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906417 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906515 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906564 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906593 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906633 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906649 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.908082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.908592 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.908817 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.909215 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.909318 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data" (OuterVolumeSpecName: "config-data") pod "3edd4390-e376-469a-b7c5-9bd7bf9dd100" (UID: "3edd4390-e376-469a-b7c5-9bd7bf9dd100"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.909393 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.915099 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"]
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.932110 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.946498 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvw7\" (UniqueName: \"kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.950684 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cb887488-w2vb4"]
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.953176 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cb887488-w2vb4"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.960960 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.961120 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.961296 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-crc55"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.961395 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.971120 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cb887488-w2vb4"]
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.002176 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"]
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.003177 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014146 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014219 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014254 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014310 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4bm\" (UniqueName: \"kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014401 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkr2\" (UniqueName: \"kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014560 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014613 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014795 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.017990 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.025080 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.025503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.026559 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.028102 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.042551 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb4bm\" (UniqueName: \"kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.053507 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"]
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.064854 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-grdr9"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.096230 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"]
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.118889 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4"
Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.118948 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9"
Feb
18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.118988 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119097 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119124 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119198 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc 
kubenswrapper[4739]: I0218 14:21:44.119238 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119292 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghkr2\" (UniqueName: \"kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119364 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltdp2\" (UniqueName: \"kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.147842 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 
14:21:44.150755 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.154861 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.161142 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.162139 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghkr2\" (UniqueName: \"kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.163936 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.166206 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.183895 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.202647 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.221308 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.221349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.221485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222286 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltdp2\" (UniqueName: \"kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222367 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 
14:21:44.222636 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222671 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222746 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222845 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222873 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qv4\" (UniqueName: \"kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.223016 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.223065 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.223089 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.223482 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.223951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.224267 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc\") pod 
\"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.224622 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.224957 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.245566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.251566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltdp2\" (UniqueName: \"kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.326766 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327116 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327245 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327372 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25qv4\" (UniqueName: \"kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327580 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327623 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.328242 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.328634 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.334002 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.334611 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.334783 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.346859 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.347597 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25qv4\" (UniqueName: \"kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.436510 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="dnsmasq-dns" containerID="cri-o://00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f" gracePeriod=10 Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.439631 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerStarted","Data":"8c8032c3a1234bf623502d6fafa31158115ef887ed497b5adb6540ed67e79d70"} Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.439665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-765d88ff9c-smd7n" event={"ID":"53848a1c-a5c5-4948-a45f-2ba01bc166ca","Type":"ContainerStarted","Data":"b1fdc945ce1e2ca101cf0efe471251f683dbf0d7225dedc33750f002d3546bd5"} Feb 18 14:21:44 crc 
kubenswrapper[4739]: I0218 14:21:44.439679 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" event={"ID":"8f41089a-bbe1-4371-9a89-38423dca256c","Type":"ContainerStarted","Data":"9184f0b0a885e5ad488bc2df04052a3a06116040f41b71728ada9eb8430b1f38"} Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.482100 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.493698 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.839913 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fccfc9568-dvccq"] Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.159493 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"] Feb 18 14:21:45 crc kubenswrapper[4739]: W0218 14:21:45.182572 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9387c384_203f_40d3_91d1_9e487b283231.slice/crio-cae25905f7886de9a8d6591b3de408a0cf4ef97bbf64ec076b2b594b6b3a3f4b WatchSource:0}: Error finding container cae25905f7886de9a8d6591b3de408a0cf4ef97bbf64ec076b2b594b6b3a3f4b: Status 404 returned error can't find the container with id cae25905f7886de9a8d6591b3de408a0cf4ef97bbf64ec076b2b594b6b3a3f4b Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.436858 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.492758 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" event={"ID":"8f41089a-bbe1-4371-9a89-38423dca256c","Type":"ContainerStarted","Data":"8d637634a7b0c9f9482e853eddb3d4a410c2953d9ecb3bf1fd2eb965271b6f5d"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.514459 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" event={"ID":"9387c384-203f-40d3-91d1-9e487b283231","Type":"ContainerStarted","Data":"cae25905f7886de9a8d6591b3de408a0cf4ef97bbf64ec076b2b594b6b3a3f4b"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.524891 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" podStartSLOduration=4.002766899 podStartE2EDuration="6.524846051s" podCreationTimestamp="2026-02-18 14:21:39 +0000 UTC" firstStartedPulling="2026-02-18 14:21:40.497175357 +0000 UTC m=+1332.992896279" lastFinishedPulling="2026-02-18 14:21:43.019254509 +0000 UTC m=+1335.514975431" observedRunningTime="2026-02-18 14:21:45.514831596 +0000 UTC m=+1338.010552528" watchObservedRunningTime="2026-02-18 14:21:45.524846051 +0000 UTC m=+1338.020566973" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.528377 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerID="00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f" exitCode=0 Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.528491 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" event={"ID":"d5e15713-5e2e-4ede-9a0a-231e49dc0deb","Type":"ContainerDied","Data":"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.528523 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" event={"ID":"d5e15713-5e2e-4ede-9a0a-231e49dc0deb","Type":"ContainerDied","Data":"ba93d9aa54ac2215f4e253e17d3a5a19152448a81d9e24cc8afa82199ab26e2b"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.528541 4739 scope.go:117] "RemoveContainer" containerID="00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.528698 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.543013 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fccfc9568-dvccq" event={"ID":"aca969df-0549-4d07-ada4-2e0515419a1d","Type":"ContainerStarted","Data":"2b92b5c4a773c7115b1ef12bec885a140f238d803824f16fa02f5eb967ccfb46"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.543067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fccfc9568-dvccq" event={"ID":"aca969df-0549-4d07-ada4-2e0515419a1d","Type":"ContainerStarted","Data":"e18ab58b38a4e0188d42f9ddcbb74eefa95de7e39125e3c317645fe197ae7d56"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.553011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerStarted","Data":"fee11676261091cbd3ef8b82bd38773fb586e3f02824dcfdf641b5fbd18e0091"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.567557 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-765d88ff9c-smd7n" event={"ID":"53848a1c-a5c5-4948-a45f-2ba01bc166ca","Type":"ContainerStarted","Data":"ce4fdbc97460f6bbc0626886a9d5a7b302054b8bff4942cc4be8129f35b706ac"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.594500 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-6cb887488-w2vb4"]
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.595070 4739 scope.go:117] "RemoveContainer" containerID="40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5"
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.607860 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-765d88ff9c-smd7n" podStartSLOduration=4.216942487 podStartE2EDuration="6.607840505s" podCreationTimestamp="2026-02-18 14:21:39 +0000 UTC" firstStartedPulling="2026-02-18 14:21:40.667639871 +0000 UTC m=+1333.163360793" lastFinishedPulling="2026-02-18 14:21:43.058537889 +0000 UTC m=+1335.554258811" observedRunningTime="2026-02-18 14:21:45.590998636 +0000 UTC m=+1338.086719558" watchObservedRunningTime="2026-02-18 14:21:45.607840505 +0000 UTC m=+1338.103561427"
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.622619 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb87s\" (UniqueName: \"kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") "
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.622670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") "
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.623487 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") "
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.623537 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") "
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.623563 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") "
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.623654 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") "
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.644691 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s" (OuterVolumeSpecName: "kube-api-access-bb87s") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "kube-api-access-bb87s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.680782 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"]
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.720686 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.725711 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb87s\" (UniqueName: \"kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.742611 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 14:21:45 crc kubenswrapper[4739]: W0218 14:21:45.788663 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9337767c_12ba_460b_854a_5c2e69db4a5c.slice/crio-fa732d1eda4ac1c7763b996c5ef44f9b843ec150eee66ab022f29219cacb77ef WatchSource:0}: Error finding container fa732d1eda4ac1c7763b996c5ef44f9b843ec150eee66ab022f29219cacb77ef: Status 404 returned error can't find the container with id fa732d1eda4ac1c7763b996c5ef44f9b843ec150eee66ab022f29219cacb77ef
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.859912 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config" (OuterVolumeSpecName: "config") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.863118 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.880775 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.881615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.883734 4739 scope.go:117] "RemoveContainer" containerID="00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f"
Feb 18 14:21:45 crc kubenswrapper[4739]: E0218 14:21:45.889567 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f\": container with ID starting with 00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f not found: ID does not exist" containerID="00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f"
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.889611 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f"} err="failed to get container status \"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f\": rpc error: code = NotFound desc = could not find container \"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f\": container with ID starting with 00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f not found: ID does not exist"
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.889638 4739 scope.go:117] "RemoveContainer" containerID="40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5"
Feb 18 14:21:45 crc kubenswrapper[4739]: E0218 14:21:45.894591 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5\": container with ID starting with 40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5 not found: ID does not exist" containerID="40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5"
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.894783 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5"} err="failed to get container status \"40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5\": rpc error: code = NotFound desc = could not find container \"40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5\": container with ID starting with 40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5 not found: ID does not exist"
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.942052 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.942287 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.942297 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.942305 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.946524 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.054609 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.295432 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"]
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.309781 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"]
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.432096 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" path="/var/lib/kubelet/pods/d5e15713-5e2e-4ede-9a0a-231e49dc0deb/volumes"
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.458447 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.647987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerStarted","Data":"0aa6f9d0113c0aad83b0711a9f1f95a0f189e2ee86406cef9587f35ef42914d9"}
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.671801 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerStarted","Data":"92e077d54516a226953141815b27472b6e615b27ebdcfef077823d82e467f49d"}
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.682690 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerStarted","Data":"75c0f160662dd962ffd03771a130a555c07977ec30eae95c749c55561113bb84"}
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.700169 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" event={"ID":"9337767c-12ba-460b-854a-5c2e69db4a5c","Type":"ContainerStarted","Data":"fa732d1eda4ac1c7763b996c5ef44f9b843ec150eee66ab022f29219cacb77ef"}
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.711635 4739 generic.go:334] "Generic (PLEG): container finished" podID="9387c384-203f-40d3-91d1-9e487b283231" containerID="a31ea1ea91692b6f59a18cc45e275b69650e934f0ab9589f21701db6c795a435" exitCode=0
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.711908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" event={"ID":"9387c384-203f-40d3-91d1-9e487b283231","Type":"ContainerDied","Data":"a31ea1ea91692b6f59a18cc45e275b69650e934f0ab9589f21701db6c795a435"}
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.748251 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fccfc9568-dvccq" event={"ID":"aca969df-0549-4d07-ada4-2e0515419a1d","Type":"ContainerStarted","Data":"5ff2372362de935a341df81a43764833bc8c8d62279f3f1075d4f3ba99ab0802"}
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.750571 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.750618 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.805725 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5fccfc9568-dvccq" podStartSLOduration=3.805706116 podStartE2EDuration="3.805706116s" podCreationTimestamp="2026-02-18 14:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:46.775006344 +0000 UTC m=+1339.270727276" watchObservedRunningTime="2026-02-18 14:21:46.805706116 +0000 UTC m=+1339.301427038"
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.478749 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.604184 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") "
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.604632 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") "
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.604967 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") "
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.605158 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pvw7\" (UniqueName: \"kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") "
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.605334 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") "
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.605479 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") "
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.624859 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7" (OuterVolumeSpecName: "kube-api-access-4pvw7") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "kube-api-access-4pvw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.648368 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.651909 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.656170 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config" (OuterVolumeSpecName: "config") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.658163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.663936 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.710663 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.710918 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pvw7\" (UniqueName: \"kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.711035 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.711136 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.711224 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.711314 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.758684 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" event={"ID":"9387c384-203f-40d3-91d1-9e487b283231","Type":"ContainerDied","Data":"cae25905f7886de9a8d6591b3de408a0cf4ef97bbf64ec076b2b594b6b3a3f4b"}
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.758735 4739 scope.go:117] "RemoveContainer" containerID="a31ea1ea91692b6f59a18cc45e275b69650e934f0ab9589f21701db6c795a435"
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.758905 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5"
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.765890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerStarted","Data":"1a8fca3cd8abe9648355c8b1fc41f8b7bfe5f0fd27b741bbf92fafac2053e432"}
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.769406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerStarted","Data":"8dd2b9302e6dd8b8a788c6130228739df1a58a6ee1a8d8355dc5ab489138ee01"}
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.769488 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerStarted","Data":"dac67b364bafdc30f9188f9edb3326eeba8fe15953fcbfe0ae9864e55228745d"}
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.769512 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cb887488-w2vb4"
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.771907 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerStarted","Data":"51c86b3e76646ccace7cb768aa196771df840d5aa0602f13a9e3d3f8fd198f42"}
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.774277 4739 generic.go:334] "Generic (PLEG): container finished" podID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerID="674be441708c52d00270c7a887278841578e6b9bf30714644be7ecc79213fa7b" exitCode=0
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.775209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" event={"ID":"9337767c-12ba-460b-854a-5c2e69db4a5c","Type":"ContainerDied","Data":"674be441708c52d00270c7a887278841578e6b9bf30714644be7ecc79213fa7b"}
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.805499 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cb887488-w2vb4" podStartSLOduration=4.8054794 podStartE2EDuration="4.8054794s" podCreationTimestamp="2026-02-18 14:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:47.78822117 +0000 UTC m=+1340.283942112" watchObservedRunningTime="2026-02-18 14:21:47.8054794 +0000 UTC m=+1340.301200322"
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.877560 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"]
Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.904642 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"]
Feb 18 14:21:48 crc kubenswrapper[4739]: I0218 14:21:48.440743 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9387c384-203f-40d3-91d1-9e487b283231" path="/var/lib/kubelet/pods/9387c384-203f-40d3-91d1-9e487b283231/volumes"
Feb 18 14:21:48 crc kubenswrapper[4739]: I0218 14:21:48.789884 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" event={"ID":"9337767c-12ba-460b-854a-5c2e69db4a5c","Type":"ContainerStarted","Data":"52b68e08b4643ed4bb44ac6b88f494d230cc74dfa319d3b1f92462acb959fc47"}
Feb 18 14:21:48 crc kubenswrapper[4739]: I0218 14:21:48.834101 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" podStartSLOduration=5.834080539 podStartE2EDuration="5.834080539s" podCreationTimestamp="2026-02-18 14:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:48.82747277 +0000 UTC m=+1341.323193702" watchObservedRunningTime="2026-02-18 14:21:48.834080539 +0000 UTC m=+1341.329801461"
Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.483779 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-grdr9"
Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.804951 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerStarted","Data":"f3277f9c953c856503e9f54f23df005c12ffcd64974ef18efe5d6f5daaca7db8"}
Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.805083 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.805146 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api" containerID="cri-o://f3277f9c953c856503e9f54f23df005c12ffcd64974ef18efe5d6f5daaca7db8" gracePeriod=30
Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.805111 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api-log" containerID="cri-o://51c86b3e76646ccace7cb768aa196771df840d5aa0602f13a9e3d3f8fd198f42" gracePeriod=30
Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.812181 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerStarted","Data":"8b75480f249109a9022e9ab32c8f19bcca001a279e1f76a25451ad0745c9106a"}
Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.834060 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.834035267 podStartE2EDuration="5.834035267s" podCreationTimestamp="2026-02-18 14:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:49.82866214 +0000 UTC m=+1342.324383062" watchObservedRunningTime="2026-02-18 14:21:49.834035267 +0000 UTC m=+1342.329756189"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.836270 4739 generic.go:334] "Generic (PLEG): container finished" podID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerID="f3277f9c953c856503e9f54f23df005c12ffcd64974ef18efe5d6f5daaca7db8" exitCode=0
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.836972 4739 generic.go:334] "Generic (PLEG): container finished" podID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerID="51c86b3e76646ccace7cb768aa196771df840d5aa0602f13a9e3d3f8fd198f42" exitCode=143
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.836358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerDied","Data":"f3277f9c953c856503e9f54f23df005c12ffcd64974ef18efe5d6f5daaca7db8"}
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.837307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerDied","Data":"51c86b3e76646ccace7cb768aa196771df840d5aa0602f13a9e3d3f8fd198f42"}
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.837323 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerDied","Data":"75c0f160662dd962ffd03771a130a555c07977ec30eae95c749c55561113bb84"}
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.837334 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75c0f160662dd962ffd03771a130a555c07977ec30eae95c749c55561113bb84"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.929884 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77cbbcb957-6xzzv"]
Feb 18 14:21:50 crc kubenswrapper[4739]: E0218 14:21:50.930361 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="init"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.930380 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="init"
Feb 18 14:21:50 crc kubenswrapper[4739]: E0218 14:21:50.930412 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="dnsmasq-dns"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.930419 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="dnsmasq-dns"
Feb 18 14:21:50 crc kubenswrapper[4739]: E0218 14:21:50.930435 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9387c384-203f-40d3-91d1-9e487b283231" containerName="init"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.930441 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9387c384-203f-40d3-91d1-9e487b283231" containerName="init"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.930669 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="dnsmasq-dns"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.930693 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9387c384-203f-40d3-91d1-9e487b283231" containerName="init"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.932000 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.940513 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.940698 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.963730 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77cbbcb957-6xzzv"]
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.005183 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.120697 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") "
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.120859 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") "
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.120889 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") "
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.120957 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") "
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121014 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25qv4\" (UniqueName: \"kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") "
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121221 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") "
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121276 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") "
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-httpd-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121784 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-ovndb-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121810 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-internal-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121848 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-public-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121872 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpt2\" (UniqueName: \"kubernetes.io/projected/6225bd93-c14b-4682-8e07-e6ca3cce37c9-kube-api-access-tcpt2\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-combined-ca-bundle\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.122033 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.122214 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.122802 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs" (OuterVolumeSpecName: "logs") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.127220 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts" (OuterVolumeSpecName: "scripts") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.127341 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4" (OuterVolumeSpecName: "kube-api-access-25qv4") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "kube-api-access-25qv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.197614 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.222740 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data" (OuterVolumeSpecName: "config-data") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224232 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-public-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224312 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcpt2\" (UniqueName: \"kubernetes.io/projected/6225bd93-c14b-4682-8e07-e6ca3cce37c9-kube-api-access-tcpt2\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224336 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-combined-ca-bundle\") pod
\"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224430 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224545 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-httpd-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224593 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-ovndb-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-internal-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.225870 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.232580 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.233039 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.234064 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.234091 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25qv4\" (UniqueName: \"kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.234106 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.234117 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.234389 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-combined-ca-bundle\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " 
pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.237663 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-public-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.241411 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-httpd-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.243640 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-internal-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.248402 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.249395 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcpt2\" (UniqueName: \"kubernetes.io/projected/6225bd93-c14b-4682-8e07-e6ca3cce37c9-kube-api-access-tcpt2\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.253081 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-ovndb-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.315959 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.341264 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.860517 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerStarted","Data":"8fee94e5c0f5f5f60603f0d079f34bec83f00648183f659c017f17757a2ba096"} Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.866122 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.979601 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.038681 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.049098 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:52 crc kubenswrapper[4739]: E0218 14:21:52.049678 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.049698 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api" Feb 18 14:21:52 crc kubenswrapper[4739]: E0218 14:21:52.049739 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api-log" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.049746 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api-log" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.049933 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.049975 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api-log" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.051181 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.054300 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.062829 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.063018 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.069647 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.091617 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.095729 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096041 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data-custom\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096160 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jfxd2\" (UniqueName: \"kubernetes.io/projected/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-kube-api-access-jfxd2\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096199 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-scripts\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096322 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-logs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096405 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096441 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.117520 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77cbbcb957-6xzzv"] Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198057 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-logs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198673 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198725 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198837 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " 
pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198886 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.199015 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data-custom\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.199049 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-logs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.199077 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfxd2\" (UniqueName: \"kubernetes.io/projected/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-kube-api-access-jfxd2\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.199101 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-scripts\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.199458 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.207314 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.210521 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.211545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.221545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-scripts\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.221944 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.224311 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data-custom\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.227804 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfxd2\" (UniqueName: \"kubernetes.io/projected/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-kube-api-access-jfxd2\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.265058 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.435808 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" path="/var/lib/kubelet/pods/f0bb43b5-4e4b-4074-ba67-59ff0d726fab/volumes" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.691651 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.831345 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.848131 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.912046 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cbbcb957-6xzzv" event={"ID":"6225bd93-c14b-4682-8e07-e6ca3cce37c9","Type":"ContainerStarted","Data":"961d8e82bef200408c26b76fd31c29fe20ffc075389e5f00afb2ee92ae0f1189"} Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.912096 4739 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-77cbbcb957-6xzzv" event={"ID":"6225bd93-c14b-4682-8e07-e6ca3cce37c9","Type":"ContainerStarted","Data":"67e947ef3b29bcee78454f4f059e18f91251978cb6401673d5658e61c52bbcbd"} Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.913657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerStarted","Data":"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530"} Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.914928 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.948643 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.200093226 podStartE2EDuration="11.948622216s" podCreationTimestamp="2026-02-18 14:21:41 +0000 UTC" firstStartedPulling="2026-02-18 14:21:43.600436487 +0000 UTC m=+1336.096157409" lastFinishedPulling="2026-02-18 14:21:51.348965477 +0000 UTC m=+1343.844686399" observedRunningTime="2026-02-18 14:21:52.938082647 +0000 UTC m=+1345.433803579" watchObservedRunningTime="2026-02-18 14:21:52.948622216 +0000 UTC m=+1345.444343148" Feb 18 14:21:53 crc kubenswrapper[4739]: W0218 14:21:53.139326 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54fd1c90_48dd_4ae7_b2db_d80aa5f14a24.slice/crio-d5ee795b09ef16c7c27319dfb689bc1d3d39ed090eb8d2c65b2f73acadcc392e WatchSource:0}: Error finding container d5ee795b09ef16c7c27319dfb689bc1d3d39ed090eb8d2c65b2f73acadcc392e: Status 404 returned error can't find the container with id d5ee795b09ef16c7c27319dfb689bc1d3d39ed090eb8d2c65b2f73acadcc392e Feb 18 14:21:53 crc kubenswrapper[4739]: I0218 14:21:53.961795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24","Type":"ContainerStarted","Data":"609f5a32a8670c7d32b7ced94f4a84aabdf37ad61537acc306cfcce6060bf2f3"} Feb 18 14:21:53 crc kubenswrapper[4739]: I0218 14:21:53.962371 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24","Type":"ContainerStarted","Data":"d5ee795b09ef16c7c27319dfb689bc1d3d39ed090eb8d2c65b2f73acadcc392e"} Feb 18 14:21:53 crc kubenswrapper[4739]: I0218 14:21:53.979686 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerStarted","Data":"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5"} Feb 18 14:21:53 crc kubenswrapper[4739]: I0218 14:21:53.996043 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cbbcb957-6xzzv" event={"ID":"6225bd93-c14b-4682-8e07-e6ca3cce37c9","Type":"ContainerStarted","Data":"9d23b6f46048d80401e2ee78e9cfb18970e6b85639b275b61df775a20a28387f"} Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.009828 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.053338665 podStartE2EDuration="11.009812334s" podCreationTimestamp="2026-02-18 14:21:43 +0000 UTC" firstStartedPulling="2026-02-18 14:21:45.825379148 +0000 UTC m=+1338.321100070" lastFinishedPulling="2026-02-18 14:21:50.781852817 +0000 UTC m=+1343.277573739" observedRunningTime="2026-02-18 14:21:54.003816331 +0000 UTC m=+1346.499537263" watchObservedRunningTime="2026-02-18 14:21:54.009812334 +0000 UTC m=+1346.505533256" Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.052513 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77cbbcb957-6xzzv" podStartSLOduration=4.052496251 podStartE2EDuration="4.052496251s" podCreationTimestamp="2026-02-18 14:21:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:54.049754231 +0000 UTC m=+1346.545475163" watchObservedRunningTime="2026-02-18 14:21:54.052496251 +0000 UTC m=+1346.548217173" Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.203425 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.487164 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.631946 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"] Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.639654 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="dnsmasq-dns" containerID="cri-o://0fa401e0fef3f9cb42562b511b0eebc5a44973f242c043cd8c922196427d9cb3" gracePeriod=10 Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.009957 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerID="0fa401e0fef3f9cb42562b511b0eebc5a44973f242c043cd8c922196427d9cb3" exitCode=0 Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.010242 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerDied","Data":"0fa401e0fef3f9cb42562b511b0eebc5a44973f242c043cd8c922196427d9cb3"} Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.013782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24","Type":"ContainerStarted","Data":"b9c2bcf20b4ec25dfd07a3c36e2b1c886ee7fdef5720dd054cc18f6812ece15e"}
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.013814 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77cbbcb957-6xzzv"
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.014011 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.035098 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.035083527 podStartE2EDuration="4.035083527s" podCreationTimestamp="2026-02-18 14:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:55.032468411 +0000 UTC m=+1347.528189333" watchObservedRunningTime="2026-02-18 14:21:55.035083527 +0000 UTC m=+1347.530804449"
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.548416 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv"
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700460 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") "
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") "
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9dzc\" (UniqueName: \"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") "
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") "
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700817 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") "
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700872 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") "
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.743151 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc" (OuterVolumeSpecName: "kube-api-access-w9dzc") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "kube-api-access-w9dzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.790353 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config" (OuterVolumeSpecName: "config") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.796416 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.804162 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.804636 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") "
Feb 18 14:21:55 crc kubenswrapper[4739]: W0218 14:21:55.804755 4739 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f4b54fe6-91fa-4ba1-9a4e-135277494a27/volumes/kubernetes.io~configmap/dns-svc
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.804771 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.805158 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.805178 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.805187 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.805200 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9dzc\" (UniqueName: \"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.806290 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.852247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.907328 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.907378 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.030991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerDied","Data":"6a36c3e7151b6223682be3dc0062f1484a767c13869813b992c048797216d7e7"}
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.031053 4739 scope.go:117] "RemoveContainer" containerID="0fa401e0fef3f9cb42562b511b0eebc5a44973f242c043cd8c922196427d9cb3"
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.031216 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv"
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.081994 4739 scope.go:117] "RemoveContainer" containerID="31b7ef4c1c644cdbe389fbfc6e7e9e8a47e57aa821f30f4da35de5aa73c5099f"
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.082168 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"]
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.097401 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"]
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.354359 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.431263 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" path="/var/lib/kubelet/pods/f4b54fe6-91fa-4ba1-9a4e-135277494a27/volumes"
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.434229 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fccfc9568-dvccq"
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.523395 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"]
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.524324 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b4b66db68-ntx7n" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api-log" containerID="cri-o://7c8f4fc08e3d71e41150f03ab573682f2c49c5142be298c07b7fb3ee868889dd" gracePeriod=30
Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.524381 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b4b66db68-ntx7n" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api" containerID="cri-o://2602390e342c4e0155ec05397045ae37047581af9665cd9582b1ac532f791135" gracePeriod=30
Feb 18 14:21:57 crc kubenswrapper[4739]: I0218 14:21:57.051353 4739 generic.go:334] "Generic (PLEG): container finished" podID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerID="7c8f4fc08e3d71e41150f03ab573682f2c49c5142be298c07b7fb3ee868889dd" exitCode=143
Feb 18 14:21:57 crc kubenswrapper[4739]: I0218 14:21:57.052722 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerDied","Data":"7c8f4fc08e3d71e41150f03ab573682f2c49c5142be298c07b7fb3ee868889dd"}
Feb 18 14:21:59 crc kubenswrapper[4739]: I0218 14:21:59.383424 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 18 14:21:59 crc kubenswrapper[4739]: I0218 14:21:59.448395 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.093310 4739 generic.go:334] "Generic (PLEG): container finished" podID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerID="2602390e342c4e0155ec05397045ae37047581af9665cd9582b1ac532f791135" exitCode=0
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.093600 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="cinder-scheduler" containerID="cri-o://78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530" gracePeriod=30
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.093994 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerDied","Data":"2602390e342c4e0155ec05397045ae37047581af9665cd9582b1ac532f791135"}
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.094407 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="probe" containerID="cri-o://6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5" gracePeriod=30
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.309007 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.186:5353: i/o timeout"
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.547650 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b4b66db68-ntx7n"
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.636911 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle\") pod \"064975cb-44bb-44b1-8d99-ea09a947b8b8\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") "
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.637249 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs\") pod \"064975cb-44bb-44b1-8d99-ea09a947b8b8\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") "
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.637824 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data\") pod \"064975cb-44bb-44b1-8d99-ea09a947b8b8\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") "
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.638185 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom\") pod \"064975cb-44bb-44b1-8d99-ea09a947b8b8\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") "
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.638327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwwjd\" (UniqueName: \"kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd\") pod \"064975cb-44bb-44b1-8d99-ea09a947b8b8\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") "
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.637775 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs" (OuterVolumeSpecName: "logs") pod "064975cb-44bb-44b1-8d99-ea09a947b8b8" (UID: "064975cb-44bb-44b1-8d99-ea09a947b8b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.661648 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "064975cb-44bb-44b1-8d99-ea09a947b8b8" (UID: "064975cb-44bb-44b1-8d99-ea09a947b8b8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.674612 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd" (OuterVolumeSpecName: "kube-api-access-dwwjd") pod "064975cb-44bb-44b1-8d99-ea09a947b8b8" (UID: "064975cb-44bb-44b1-8d99-ea09a947b8b8"). InnerVolumeSpecName "kube-api-access-dwwjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.677109 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "064975cb-44bb-44b1-8d99-ea09a947b8b8" (UID: "064975cb-44bb-44b1-8d99-ea09a947b8b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.707736 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data" (OuterVolumeSpecName: "config-data") pod "064975cb-44bb-44b1-8d99-ea09a947b8b8" (UID: "064975cb-44bb-44b1-8d99-ea09a947b8b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.741604 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.741646 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.741656 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.741664 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.741674 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwwjd\" (UniqueName: \"kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.107717 4739 generic.go:334] "Generic (PLEG): container finished" podID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerID="6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5" exitCode=0
Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.107752 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerDied","Data":"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5"}
Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.112569 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerDied","Data":"e6e7dfb42369260f31fbf7b2c8b3ddee88d4d1f06f45a187f08b311b7e5a41ef"}
Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.112647 4739 scope.go:117] "RemoveContainer" containerID="2602390e342c4e0155ec05397045ae37047581af9665cd9582b1ac532f791135"
Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.112870 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b4b66db68-ntx7n"
Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.167295 4739 scope.go:117] "RemoveContainer" containerID="7c8f4fc08e3d71e41150f03ab573682f2c49c5142be298c07b7fb3ee868889dd"
Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.168252 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"]
Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.179632 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"]
Feb 18 14:22:02 crc kubenswrapper[4739]: I0218 14:22:02.424669 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" path="/var/lib/kubelet/pods/064975cb-44bb-44b1-8d99-ea09a947b8b8/volumes"
Feb 18 14:22:03 crc kubenswrapper[4739]: I0218 14:22:03.027303 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7dff988c46-72t9g"
Feb 18 14:22:04 crc kubenswrapper[4739]: I0218 14:22:04.749928 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 18 14:22:04 crc kubenswrapper[4739]: I0218 14:22:04.903907 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:22:04 crc kubenswrapper[4739]: I0218 14:22:04.984845 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65fbfb5b48-rchlc"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.121043 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.178073 4739 generic.go:334] "Generic (PLEG): container finished" podID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerID="78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530" exitCode=0
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.179169 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.179504 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerDied","Data":"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530"}
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.179569 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerDied","Data":"0aa6f9d0113c0aad83b0711a9f1f95a0f189e2ee86406cef9587f35ef42914d9"}
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.179586 4739 scope.go:117] "RemoveContainer" containerID="6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.212598 4739 scope.go:117] "RemoveContainer" containerID="78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.246494 4739 scope.go:117] "RemoveContainer" containerID="6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5"
Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.247032 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5\": container with ID starting with 6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5 not found: ID does not exist" containerID="6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.247060 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5"} err="failed to get container status \"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5\": rpc error: code = NotFound desc = could not find container \"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5\": container with ID starting with 6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5 not found: ID does not exist"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.247081 4739 scope.go:117] "RemoveContainer" containerID="78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530"
Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.247311 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530\": container with ID starting with 78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530 not found: ID does not exist" containerID="78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.247329 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530"} err="failed to get container status \"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530\": rpc error: code = NotFound desc = could not find container \"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530\": container with ID starting with 78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530 not found: ID does not exist"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279233 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") "
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279397 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb4bm\" (UniqueName: \"kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") "
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279421 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") "
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279467 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") "
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279497 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") "
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279594 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") "
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.281494 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.288284 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm" (OuterVolumeSpecName: "kube-api-access-vb4bm") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "kube-api-access-vb4bm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.290586 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts" (OuterVolumeSpecName: "scripts") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.315603 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.383125 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb4bm\" (UniqueName: \"kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.383357 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.383365 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.383373 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.386756 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.477228 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b4b66db68-ntx7n" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.477603 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b4b66db68-ntx7n" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.477797 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data" (OuterVolumeSpecName: "config-data") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.488083 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.488332 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.546532 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.563110 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.590817 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.591514 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api-log"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.591600 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api-log"
Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.591663 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="init"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.591713 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="init"
Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.591781 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="probe"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.591835 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="probe"
Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.591896 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="cinder-scheduler"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.591960 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="cinder-scheduler"
Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.592023 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592076 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api"
Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.592155 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="dnsmasq-dns"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592209 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="dnsmasq-dns"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592480 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592543 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="cinder-scheduler"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592613 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="probe"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592673 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api-log"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592731 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="dnsmasq-dns"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.593878 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.606853 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.606930 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.693020 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2bm\" (UniqueName: \"kubernetes.io/projected/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-kube-api-access-9k2bm\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.693289 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.693636 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.694931 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.695249 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.695405 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797215 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797331 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0"
Feb 18 14:22:05 crc
kubenswrapper[4739]: I0218 14:22:05.797372 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797408 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797481 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797631 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k2bm\" (UniqueName: \"kubernetes.io/projected/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-kube-api-access-9k2bm\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.798789 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.802327 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.802484 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.802994 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.805829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.843008 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k2bm\" (UniqueName: \"kubernetes.io/projected/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-kube-api-access-9k2bm\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.996388 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:22:06 crc kubenswrapper[4739]: I0218 14:22:06.425778 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" path="/var/lib/kubelet/pods/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d/volumes" Feb 18 14:22:06 crc kubenswrapper[4739]: I0218 14:22:06.547383 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:22:06 crc kubenswrapper[4739]: W0218 14:22:06.548438 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1a7d36_7f60_40b3_82ee_2fd64f780bc4.slice/crio-2ed3252645f8da01309223746bc76942763bb424ec70ccde2c12d3748c26d748 WatchSource:0}: Error finding container 2ed3252645f8da01309223746bc76942763bb424ec70ccde2c12d3748c26d748: Status 404 returned error can't find the container with id 2ed3252645f8da01309223746bc76942763bb424ec70ccde2c12d3748c26d748 Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.212583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerStarted","Data":"2ed3252645f8da01309223746bc76942763bb424ec70ccde2c12d3748c26d748"} Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.908764 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.910578 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.913188 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2g9nj" Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.914558 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.914932 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.921652 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.047430 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.047552 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.047693 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dfpn\" (UniqueName: \"kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.047802 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.166361 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.169467 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.169569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dfpn\" (UniqueName: \"kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.171020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.171746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.182523 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.187430 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.210360 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dfpn\" (UniqueName: \"kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.234206 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerStarted","Data":"c2299ede957a85075e4ce2e2081142ad7971d798d620aa7a55a105b0534976ff"} Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.234263 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerStarted","Data":"c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665"} Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.236467 4739 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.282352 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.315537 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.340647 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.340625692 podStartE2EDuration="3.340625692s" podCreationTimestamp="2026-02-18 14:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:08.27252026 +0000 UTC m=+1360.768241202" watchObservedRunningTime="2026-02-18 14:22:08.340625692 +0000 UTC m=+1360.836346614" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.377362 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.392955 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.393076 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.483082 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.483415 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-openstack-config-secret\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.483614 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6699e575-f077-433c-a257-f65f329d6e69-openstack-config\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.483736 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bjnm\" (UniqueName: \"kubernetes.io/projected/6699e575-f077-433c-a257-f65f329d6e69-kube-api-access-5bjnm\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: E0218 14:22:08.542962 4739 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 18 14:22:08 crc kubenswrapper[4739]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_466767d7-e9c0-4e67-bd56-9c4d53711acb_0(66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185): 
error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185" Netns:"/var/run/netns/7cf92b63-3c80-4fe0-82eb-fafb366a05e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185;K8S_POD_UID=466767d7-e9c0-4e67-bd56-9c4d53711acb" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/466767d7-e9c0-4e67-bd56-9c4d53711acb]: expected pod UID "466767d7-e9c0-4e67-bd56-9c4d53711acb" but got "6699e575-f077-433c-a257-f65f329d6e69" from Kube API Feb 18 14:22:08 crc kubenswrapper[4739]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 14:22:08 crc kubenswrapper[4739]: > Feb 18 14:22:08 crc kubenswrapper[4739]: E0218 14:22:08.543028 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 18 14:22:08 crc kubenswrapper[4739]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_466767d7-e9c0-4e67-bd56-9c4d53711acb_0(66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185" 
Netns:"/var/run/netns/7cf92b63-3c80-4fe0-82eb-fafb366a05e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185;K8S_POD_UID=466767d7-e9c0-4e67-bd56-9c4d53711acb" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/466767d7-e9c0-4e67-bd56-9c4d53711acb]: expected pod UID "466767d7-e9c0-4e67-bd56-9c4d53711acb" but got "6699e575-f077-433c-a257-f65f329d6e69" from Kube API Feb 18 14:22:08 crc kubenswrapper[4739]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 14:22:08 crc kubenswrapper[4739]: > pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.587081 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bjnm\" (UniqueName: \"kubernetes.io/projected/6699e575-f077-433c-a257-f65f329d6e69-kube-api-access-5bjnm\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.587330 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.587386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-openstack-config-secret\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.587550 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6699e575-f077-433c-a257-f65f329d6e69-openstack-config\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.589591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6699e575-f077-433c-a257-f65f329d6e69-openstack-config\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.592000 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.592030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-openstack-config-secret\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.606918 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bjnm\" (UniqueName: \"kubernetes.io/projected/6699e575-f077-433c-a257-f65f329d6e69-kube-api-access-5bjnm\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " 
pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.792477 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.264772 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.312794 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.327999 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="466767d7-e9c0-4e67-bd56-9c4d53711acb" podUID="6699e575-f077-433c-a257-f65f329d6e69" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.410230 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dfpn\" (UniqueName: \"kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn\") pod \"466767d7-e9c0-4e67-bd56-9c4d53711acb\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.410586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle\") pod \"466767d7-e9c0-4e67-bd56-9c4d53711acb\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.410741 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config\") pod \"466767d7-e9c0-4e67-bd56-9c4d53711acb\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.410933 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret\") pod \"466767d7-e9c0-4e67-bd56-9c4d53711acb\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.412534 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "466767d7-e9c0-4e67-bd56-9c4d53711acb" (UID: "466767d7-e9c0-4e67-bd56-9c4d53711acb"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.426011 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "466767d7-e9c0-4e67-bd56-9c4d53711acb" (UID: "466767d7-e9c0-4e67-bd56-9c4d53711acb"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.443673 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn" (OuterVolumeSpecName: "kube-api-access-8dfpn") pod "466767d7-e9c0-4e67-bd56-9c4d53711acb" (UID: "466767d7-e9c0-4e67-bd56-9c4d53711acb"). InnerVolumeSpecName "kube-api-access-8dfpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.445422 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.445704 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "466767d7-e9c0-4e67-bd56-9c4d53711acb" (UID: "466767d7-e9c0-4e67-bd56-9c4d53711acb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.516233 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.516265 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dfpn\" (UniqueName: \"kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.516274 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.516282 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.276670 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"6699e575-f077-433c-a257-f65f329d6e69","Type":"ContainerStarted","Data":"5628848a561d84934ef8f4ff8e31d05cc9adb299e3e936d904e4e46f12cca2c1"}
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.276705 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.300437 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="466767d7-e9c0-4e67-bd56-9c4d53711acb" podUID="6699e575-f077-433c-a257-f65f329d6e69"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.422766 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466767d7-e9c0-4e67-bd56-9c4d53711acb" path="/var/lib/kubelet/pods/466767d7-e9c0-4e67-bd56-9c4d53711acb/volumes"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.781634 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"]
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.783551 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.787804 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gcstc"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.788332 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.788373 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.793381 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.793589 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.793644 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lczr4\" (UniqueName: \"kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.793798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.831578 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"]
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.896982 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"]
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.904363 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.907594 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"]
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.908323 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.908415 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.908456 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lczr4\" (UniqueName: \"kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.908554 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.924128 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.932514 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"]
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.933087 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.934328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.935969 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.941066 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczr4\" (UniqueName: \"kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.943345 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"]
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.944993 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.948705 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.954362 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"]
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.964753 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.965886 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"]
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:10.998735 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015051 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015100 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015123 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015191 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65h5g\" (UniqueName: \"kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015271 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015290 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh2gn\" (UniqueName: \"kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015323 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015406 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015433 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4n5\" (UniqueName: \"kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015510 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.121478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh2gn\" (UniqueName: \"kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.123847 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.124064 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.124977 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.125279 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg4n5\" (UniqueName: \"kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.125656 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.125893 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126052 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126179 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126595 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126743 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65h5g\" (UniqueName: \"kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126876 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.127063 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.127166 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.128593 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.128883 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.131710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.131957 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.132391 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.136954 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.137248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.139827 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.152813 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65h5g\" (UniqueName: \"kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.153842 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh2gn\" (UniqueName: \"kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.161995 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg4n5\" (UniqueName: \"kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.189622 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.193849 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.195214 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.340191 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.353399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.364185 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b54c68f9b-f929d"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.665157 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-668fffc447-mjpk7"]
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.677690 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.681792 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.682021 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.682179 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.686023 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-668fffc447-mjpk7"]
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750024 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrl68\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-kube-api-access-qrl68\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750084 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-run-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750112 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-internal-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750275 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-combined-ca-bundle\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750383 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-public-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750477 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-config-data\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750715 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-log-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750892 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-etc-swift\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.853076 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"]
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856464 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-etc-swift\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856577 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrl68\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-kube-api-access-qrl68\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-run-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856649 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-internal-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856703 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-combined-ca-bundle\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856740 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-public-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-config-data\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-log-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.857476 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-log-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.864247 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-etc-swift\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.864675 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-run-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.865625 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-combined-ca-bundle\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.865772 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-internal-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.866892 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-public-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.868696 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-config-data\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.882869 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrl68\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-kube-api-access-qrl68\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.972660 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.010650 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-668fffc447-mjpk7"
Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.205670 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"]
Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.375720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8c9d795d5-hcnvm" event={"ID":"48f5a3e4-7bee-4689-b7b8-5869536bebb6","Type":"ContainerStarted","Data":"93bc0594ac2cecd77e6e563c92943e92190081b3c713021d84dd28fc365b4b5c"}
Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.444087 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"]
Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.444132 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" event={"ID":"496019f4-ba1f-40a6-9cff-bf7bd8dfee51","Type":"ContainerStarted","Data":"6ad816951b3fbde1a7196efd13d5a85b80b684bb992e88915048b9d53fd1030f"}
Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.542213 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"]
Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.175129 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-668fffc447-mjpk7"]
Feb 18 14:22:13 crc 
kubenswrapper[4739]: W0218 14:22:13.183266 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac478be7_1c16_4a7f_a2d2_618cfe76c3d3.slice/crio-f80907fc3d581ffe9545b2921e4a44bc53a12330f66c2c1c723c34fba1d3d34e WatchSource:0}: Error finding container f80907fc3d581ffe9545b2921e4a44bc53a12330f66c2c1c723c34fba1d3d34e: Status 404 returned error can't find the container with id f80907fc3d581ffe9545b2921e4a44bc53a12330f66c2c1c723c34fba1d3d34e Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.433211 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-668fffc447-mjpk7" event={"ID":"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3","Type":"ContainerStarted","Data":"f80907fc3d581ffe9545b2921e4a44bc53a12330f66c2c1c723c34fba1d3d34e"} Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.436899 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b54c68f9b-f929d" event={"ID":"93ebc0dc-ca08-4c3e-bf54-d6530d56c322","Type":"ContainerStarted","Data":"d5d149d08742d33f66584c180e4bcc703eac2ef7429ac5a118311ff5b7b3d10b"} Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.439741 4739 generic.go:334] "Generic (PLEG): container finished" podID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerID="8b70db3067c947ac9fe93c9c738cc56e4ed6885f9ff81677596f72e6844d09b7" exitCode=0 Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.439792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" event={"ID":"496019f4-ba1f-40a6-9cff-bf7bd8dfee51","Type":"ContainerDied","Data":"8b70db3067c947ac9fe93c9c738cc56e4ed6885f9ff81677596f72e6844d09b7"} Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.444412 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74f6568664-l6ffq" 
event={"ID":"3a6654bc-87e3-4bd4-9f38-08f64907ea4c","Type":"ContainerStarted","Data":"d6095d355750dd1b26e4a1ff757c91ef13850ef0b2d9531d6b4d28aeda570b18"} Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.451484 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8c9d795d5-hcnvm" event={"ID":"48f5a3e4-7bee-4689-b7b8-5869536bebb6","Type":"ContainerStarted","Data":"82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990"} Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.451720 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.515816 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-8c9d795d5-hcnvm" podStartSLOduration=3.515792127 podStartE2EDuration="3.515792127s" podCreationTimestamp="2026-02-18 14:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:13.488133302 +0000 UTC m=+1365.983854224" watchObservedRunningTime="2026-02-18 14:22:13.515792127 +0000 UTC m=+1366.011513059" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.338944 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.490874 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-668fffc447-mjpk7" event={"ID":"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3","Type":"ContainerStarted","Data":"d33ba6d2fba2c16b217add56ea86461084ffe6ea392032a84c7ade474f0d269f"} Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.490922 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-668fffc447-mjpk7" 
event={"ID":"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3","Type":"ContainerStarted","Data":"152e44b826638725bb05d54cee25cb071243e222f5966d179642cf8cc599da0e"} Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.492251 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.492289 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.541083 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-668fffc447-mjpk7" podStartSLOduration=3.541064603 podStartE2EDuration="3.541064603s" podCreationTimestamp="2026-02-18 14:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:14.534742934 +0000 UTC m=+1367.030463856" watchObservedRunningTime="2026-02-18 14:22:14.541064603 +0000 UTC m=+1367.036785525" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.551683 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" event={"ID":"496019f4-ba1f-40a6-9cff-bf7bd8dfee51","Type":"ContainerStarted","Data":"38483feafbc06f3f1617bba16dbce12f0da5c76ff8f6d9cf24f5ec57e0763180"} Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.551740 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.594603 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" podStartSLOduration=4.594578139 podStartE2EDuration="4.594578139s" podCreationTimestamp="2026-02-18 14:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 14:22:14.572825992 +0000 UTC m=+1367.068546924" watchObservedRunningTime="2026-02-18 14:22:14.594578139 +0000 UTC m=+1367.090299071" Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.201000 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.205791 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-central-agent" containerID="cri-o://fee11676261091cbd3ef8b82bd38773fb586e3f02824dcfdf641b5fbd18e0091" gracePeriod=30 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.206311 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="proxy-httpd" containerID="cri-o://8fee94e5c0f5f5f60603f0d079f34bec83f00648183f659c017f17757a2ba096" gracePeriod=30 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.206384 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="sg-core" containerID="cri-o://8b75480f249109a9022e9ab32c8f19bcca001a279e1f76a25451ad0745c9106a" gracePeriod=30 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.206433 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-notification-agent" containerID="cri-o://1a8fca3cd8abe9648355c8b1fc41f8b7bfe5f0fd27b741bbf92fafac2053e432" gracePeriod=30 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.457226 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.587220 4739 generic.go:334] "Generic (PLEG): container 
finished" podID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerID="8fee94e5c0f5f5f60603f0d079f34bec83f00648183f659c017f17757a2ba096" exitCode=0 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.587248 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerID="8b75480f249109a9022e9ab32c8f19bcca001a279e1f76a25451ad0745c9106a" exitCode=2 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.587574 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerDied","Data":"8fee94e5c0f5f5f60603f0d079f34bec83f00648183f659c017f17757a2ba096"} Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.587727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerDied","Data":"8b75480f249109a9022e9ab32c8f19bcca001a279e1f76a25451ad0745c9106a"} Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.659926 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.662200 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.684133 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.686040 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.699422 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.701129 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.712272 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.721809 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.733571 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845423 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845750 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlgn\" (UniqueName: \"kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845768 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845786 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njr9t\" (UniqueName: 
\"kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845828 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845863 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845889 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4p84\" (UniqueName: \"kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845905 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.846056 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.846162 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.846212 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.948879 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949050 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949210 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hlgn\" (UniqueName: \"kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949227 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949286 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njr9t\" 
(UniqueName: \"kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949366 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949431 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949542 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4p84\" (UniqueName: \"kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949692 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949727 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.964181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.976016 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.977316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.979856 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.980918 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle\") pod \"heat-engine-cf66499c9-k855m\" 
(UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.982272 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4p84\" (UniqueName: \"kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.986138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njr9t\" (UniqueName: \"kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.000207 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.013069 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.021715 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-668fffc447-mjpk7" podUID="ac478be7-1c16-4a7f-a2d2-618cfe76c3d3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.189192 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.189635 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.189999 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.197687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hlgn\" (UniqueName: \"kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.284867 4739 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.348424 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.610901 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerID="1a8fca3cd8abe9648355c8b1fc41f8b7bfe5f0fd27b741bbf92fafac2053e432" exitCode=0 Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.611179 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerID="fee11676261091cbd3ef8b82bd38773fb586e3f02824dcfdf641b5fbd18e0091" exitCode=0 Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.610953 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerDied","Data":"1a8fca3cd8abe9648355c8b1fc41f8b7bfe5f0fd27b741bbf92fafac2053e432"} Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.611223 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerDied","Data":"fee11676261091cbd3ef8b82bd38773fb586e3f02824dcfdf641b5fbd18e0091"} Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.143361 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.164465 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.189138 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.191191 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.196908 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.197144 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.207736 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.207805 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.207823 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4br8z\" (UniqueName: \"kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.207840 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " 
pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.207862 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.208019 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.221282 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.223290 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.230579 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.245602 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.245863 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.269499 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.314965 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.315107 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.315250 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 
14:22:20.315280 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.315302 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318290 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hc2h\" (UniqueName: \"kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318382 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc 
kubenswrapper[4739]: I0218 14:22:20.318411 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4br8z\" (UniqueName: \"kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318436 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318476 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318529 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.367891 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 
14:22:20.368089 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.368362 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.384383 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.388000 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4br8z\" (UniqueName: \"kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.389100 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420264 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420318 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hc2h\" (UniqueName: \"kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420371 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420427 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420524 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420634 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.427166 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.427543 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.428995 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.430315 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.431199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom\") pod 
\"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.458381 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hc2h\" (UniqueName: \"kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.557874 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.573249 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.342421 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.350567 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.528522 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"] Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.528785 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" containerID="cri-o://52b68e08b4643ed4bb44ac6b88f494d230cc74dfa319d3b1f92462acb959fc47" gracePeriod=10 Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.539823 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cb887488-w2vb4"] Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.540167 4739 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cb887488-w2vb4" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-api" containerID="cri-o://dac67b364bafdc30f9188f9edb3326eeba8fe15953fcbfe0ae9864e55228745d" gracePeriod=30 Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.540384 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cb887488-w2vb4" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-httpd" containerID="cri-o://8dd2b9302e6dd8b8a788c6130228739df1a58a6ee1a8d8355dc5ab489138ee01" gracePeriod=30 Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.022962 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.032306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.667338 4739 generic.go:334] "Generic (PLEG): container finished" podID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerID="8dd2b9302e6dd8b8a788c6130228739df1a58a6ee1a8d8355dc5ab489138ee01" exitCode=0 Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.667398 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerDied","Data":"8dd2b9302e6dd8b8a788c6130228739df1a58a6ee1a8d8355dc5ab489138ee01"} Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.669975 4739 generic.go:334] "Generic (PLEG): container finished" podID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerID="52b68e08b4643ed4bb44ac6b88f494d230cc74dfa319d3b1f92462acb959fc47" exitCode=0 Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.670107 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" 
event={"ID":"9337767c-12ba-460b-854a-5c2e69db4a5c","Type":"ContainerDied","Data":"52b68e08b4643ed4bb44ac6b88f494d230cc74dfa319d3b1f92462acb959fc47"} Feb 18 14:22:24 crc kubenswrapper[4739]: I0218 14:22:24.483207 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Feb 18 14:22:28 crc kubenswrapper[4739]: I0218 14:22:28.759036 4739 generic.go:334] "Generic (PLEG): container finished" podID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerID="dac67b364bafdc30f9188f9edb3326eeba8fe15953fcbfe0ae9864e55228745d" exitCode=0 Feb 18 14:22:28 crc kubenswrapper[4739]: I0218 14:22:28.759680 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerDied","Data":"dac67b364bafdc30f9188f9edb3326eeba8fe15953fcbfe0ae9864e55228745d"} Feb 18 14:22:28 crc kubenswrapper[4739]: E0218 14:22:28.771659 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 18 14:22:28 crc kubenswrapper[4739]: E0218 14:22:28.771814 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b8h678hffh68dhb5h656h679h666h699h5dch5fbh5cdh654h58fh9h5b4h9ch5fdh5b7h5b4h584h698h665h9dh8dh5bfh6dh59dh65fh594h56fhf8q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bjnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(6699e575-f077-433c-a257-f65f329d6e69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:22:28 crc kubenswrapper[4739]: E0218 14:22:28.773171 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="6699e575-f077-433c-a257-f65f329d6e69" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.282427 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.461310 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.461816 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.461841 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xblb8\" (UniqueName: \"kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.464825 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.464898 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.464980 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.465002 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.466672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.469105 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.473757 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8" (OuterVolumeSpecName: "kube-api-access-xblb8") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "kube-api-access-xblb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.482077 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts" (OuterVolumeSpecName: "scripts") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.483788 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.546848 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.572123 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.572702 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.572720 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.572730 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.572744 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xblb8\" (UniqueName: \"kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.646607 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.674343 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.775988 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data" (OuterVolumeSpecName: "config-data") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.777949 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.824272 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.825230 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerDied","Data":"8c8032c3a1234bf623502d6fafa31158115ef887ed497b5adb6540ed67e79d70"} Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.825284 4739 scope.go:117] "RemoveContainer" containerID="8fee94e5c0f5f5f60603f0d079f34bec83f00648183f659c017f17757a2ba096" Feb 18 14:22:29 crc kubenswrapper[4739]: E0218 14:22:29.830377 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="6699e575-f077-433c-a257-f65f329d6e69" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.895704 4739 scope.go:117] "RemoveContainer" containerID="8b75480f249109a9022e9ab32c8f19bcca001a279e1f76a25451ad0745c9106a" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.923608 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.947953 4739 scope.go:117] "RemoveContainer" containerID="1a8fca3cd8abe9648355c8b1fc41f8b7bfe5f0fd27b741bbf92fafac2053e432" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.978488 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.006529 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:30 crc kubenswrapper[4739]: E0218 14:22:30.007218 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="sg-core" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007245 4739 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="sg-core" Feb 18 14:22:30 crc kubenswrapper[4739]: E0218 14:22:30.007262 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-central-agent" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007271 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-central-agent" Feb 18 14:22:30 crc kubenswrapper[4739]: E0218 14:22:30.007293 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-notification-agent" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007302 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-notification-agent" Feb 18 14:22:30 crc kubenswrapper[4739]: E0218 14:22:30.007322 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="proxy-httpd" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007330 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="proxy-httpd" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007633 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-notification-agent" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007670 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="sg-core" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007704 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-central-agent" Feb 18 14:22:30 crc 
kubenswrapper[4739]: I0218 14:22:30.007718 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="proxy-httpd" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.010330 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.014473 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.014605 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.015725 4739 scope.go:117] "RemoveContainer" containerID="fee11676261091cbd3ef8b82bd38773fb586e3f02824dcfdf641b5fbd18e0091" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.030530 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095235 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bldt\" (UniqueName: \"kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095469 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095538 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095589 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095624 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095644 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201010 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bldt\" (UniqueName: \"kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt\") pod \"ceilometer-0\" (UID: 
\"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201626 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201715 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201771 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201811 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201832 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.204754 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.205120 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.208281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.208994 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.211431 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.211523 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.233129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bldt\" (UniqueName: \"kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.338019 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.358492 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.460559 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" path="/var/lib/kubelet/pods/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc/volumes" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.497709 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.546076 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623075 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs\") pod \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623158 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623219 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghkr2\" (UniqueName: \"kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2\") pod \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623240 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config\") pod \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623262 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltdp2\" (UniqueName: \"kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623310 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623334 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623387 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623492 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle\") pod \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623587 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623646 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config\") pod \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 
14:22:30.686010 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7e8a55f3-28f4-46da-bc87-6d16902b2dba" (UID: "7e8a55f3-28f4-46da-bc87-6d16902b2dba"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.704042 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2" (OuterVolumeSpecName: "kube-api-access-ghkr2") pod "7e8a55f3-28f4-46da-bc87-6d16902b2dba" (UID: "7e8a55f3-28f4-46da-bc87-6d16902b2dba"). InnerVolumeSpecName "kube-api-access-ghkr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.729850 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2" (OuterVolumeSpecName: "kube-api-access-ltdp2") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "kube-api-access-ltdp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.744504 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghkr2\" (UniqueName: \"kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.744556 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.744572 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltdp2\" (UniqueName: \"kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.937562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cf66499c9-k855m" event={"ID":"9b3545e1-27f7-421f-9471-809d6b04706d","Type":"ContainerStarted","Data":"34402e3be46581b4f11650c5f4f2ec4f1afe7d82b3230635fe9430959d1f9c69"} Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.976128 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerDied","Data":"92e077d54516a226953141815b27472b6e615b27ebdcfef077823d82e467f49d"} Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.976193 4739 scope.go:117] "RemoveContainer" containerID="8dd2b9302e6dd8b8a788c6130228739df1a58a6ee1a8d8355dc5ab489138ee01" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.976378 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.007786 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config" (OuterVolumeSpecName: "config") pod "7e8a55f3-28f4-46da-bc87-6d16902b2dba" (UID: "7e8a55f3-28f4-46da-bc87-6d16902b2dba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.008081 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-74f6568664-l6ffq" podUID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" containerName="heat-cfnapi" containerID="cri-o://cff21032675d69321ff58f5bdd004b9b71a78b6909e645fa2ed5105f4cac95f4" gracePeriod=60 Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.008177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74f6568664-l6ffq" event={"ID":"3a6654bc-87e3-4bd4-9f38-08f64907ea4c","Type":"ContainerStarted","Data":"cff21032675d69321ff58f5bdd004b9b71a78b6909e645fa2ed5105f4cac95f4"} Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.008227 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.055932 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.065196 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.067362 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.067760 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" event={"ID":"9337767c-12ba-460b-854a-5c2e69db4a5c","Type":"ContainerDied","Data":"fa732d1eda4ac1c7763b996c5ef44f9b843ec150eee66ab022f29219cacb77ef"} Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.078003 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-74f6568664-l6ffq" podStartSLOduration=4.160515717 podStartE2EDuration="21.077976615s" podCreationTimestamp="2026-02-18 14:22:10 +0000 UTC" firstStartedPulling="2026-02-18 14:22:12.462520908 +0000 UTC m=+1364.958241830" lastFinishedPulling="2026-02-18 14:22:29.379981806 +0000 UTC m=+1381.875702728" observedRunningTime="2026-02-18 14:22:31.054489204 +0000 UTC m=+1383.550210126" watchObservedRunningTime="2026-02-18 14:22:31.077976615 +0000 UTC m=+1383.573697537" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.084977 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.116462 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b54c68f9b-f929d" event={"ID":"93ebc0dc-ca08-4c3e-bf54-d6530d56c322","Type":"ContainerStarted","Data":"311d21994840f4dffed976021db3e086569fe52a37728ae38e5c972914ef7d61"} Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.124637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.116599 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6b54c68f9b-f929d" podUID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" containerName="heat-api" containerID="cri-o://311d21994840f4dffed976021db3e086569fe52a37728ae38e5c972914ef7d61" gracePeriod=60 Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.172682 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.172736 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.247727 4739 scope.go:117] "RemoveContainer" containerID="dac67b364bafdc30f9188f9edb3326eeba8fe15953fcbfe0ae9864e55228745d" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.267198 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7e8a55f3-28f4-46da-bc87-6d16902b2dba" (UID: "7e8a55f3-28f4-46da-bc87-6d16902b2dba"). 
InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.268512 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.276584 4739 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.302897 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"] Feb 18 14:22:31 crc kubenswrapper[4739]: W0218 14:22:31.313004 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54b11ed9_a528_468d_ad77_89ee83d042c5.slice/crio-3101800e54f402ed2a23af1f3aee27b29f2d43b6c77b8db6cee81f7af07674c1 WatchSource:0}: Error finding container 3101800e54f402ed2a23af1f3aee27b29f2d43b6c77b8db6cee81f7af07674c1: Status 404 returned error can't find the container with id 3101800e54f402ed2a23af1f3aee27b29f2d43b6c77b8db6cee81f7af07674c1 Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.336885 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e8a55f3-28f4-46da-bc87-6d16902b2dba" (UID: "7e8a55f3-28f4-46da-bc87-6d16902b2dba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.370007 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: W0218 14:22:31.371329 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418a2d42_e21e_4d0d_b295_3178e079431c.slice/crio-a742c3494bc51e899a5c01b6b095653da1f5cc7a599a99cd559cc59388b29eb4 WatchSource:0}: Error finding container a742c3494bc51e899a5c01b6b095653da1f5cc7a599a99cd559cc59388b29eb4: Status 404 returned error can't find the container with id a742c3494bc51e899a5c01b6b095653da1f5cc7a599a99cd559cc59388b29eb4 Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.381727 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.384302 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.393192 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config" (OuterVolumeSpecName: "config") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.394946 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.419023 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.428263 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.437540 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6b54c68f9b-f929d" podStartSLOduration=4.518114306 podStartE2EDuration="21.437521334s" podCreationTimestamp="2026-02-18 14:22:10 +0000 UTC" firstStartedPulling="2026-02-18 14:22:12.460557708 +0000 UTC m=+1364.956278640" lastFinishedPulling="2026-02-18 14:22:29.379964756 +0000 UTC m=+1381.875685668" observedRunningTime="2026-02-18 14:22:31.145585324 +0000 UTC m=+1383.641306266" watchObservedRunningTime="2026-02-18 14:22:31.437521334 +0000 UTC m=+1383.933242256" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.480325 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.486537 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.575666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.588368 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.632871 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cb887488-w2vb4"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.640577 4739 scope.go:117] "RemoveContainer" containerID="52b68e08b4643ed4bb44ac6b88f494d230cc74dfa319d3b1f92462acb959fc47" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.647421 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6cb887488-w2vb4"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.763959 4739 scope.go:117] "RemoveContainer" containerID="674be441708c52d00270c7a887278841578e6b9bf30714644be7ecc79213fa7b" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.810921 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.822951 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"] Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.182645 4739 generic.go:334] "Generic (PLEG): container finished" podID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" containerID="311d21994840f4dffed976021db3e086569fe52a37728ae38e5c972914ef7d61" exitCode=0 Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.182812 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b54c68f9b-f929d" event={"ID":"93ebc0dc-ca08-4c3e-bf54-d6530d56c322","Type":"ContainerDied","Data":"311d21994840f4dffed976021db3e086569fe52a37728ae38e5c972914ef7d61"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.187480 4739 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerStarted","Data":"380597d90d5bc9556e8ce886d3f60776a514e8cf358489da36b8633a600f819d"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.187521 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerStarted","Data":"8eee24ec7f9accbd61a3e88a575fabd5b156dc4338ac144c30c542bf27a434fc"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.188939 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.192333 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" event={"ID":"418a2d42-e21e-4d0d-b295-3178e079431c","Type":"ContainerStarted","Data":"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.192394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" event={"ID":"418a2d42-e21e-4d0d-b295-3178e079431c","Type":"ContainerStarted","Data":"a742c3494bc51e899a5c01b6b095653da1f5cc7a599a99cd559cc59388b29eb4"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.192918 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.200645 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59f4cc7b48-2kzkr" event={"ID":"40d4949b-6d9f-425e-b02f-d8caa727ed99","Type":"ContainerStarted","Data":"12eea8fb9fe4ae7ff2a3c678dc4bd3905eb6fb61a72f8c583710252b1c05d211"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.200690 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59f4cc7b48-2kzkr" 
event={"ID":"40d4949b-6d9f-425e-b02f-d8caa727ed99","Type":"ContainerStarted","Data":"182afb94ab91cf9899a4110a4be4e76e5c04c7d5630670036fcfd2f21cbc8a5f"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.200807 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.206245 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerStarted","Data":"f49f9c840da6b7b1c2c162adfd6ff58755e7165a8c2d9b23a26c34f3222084fc"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.215744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-577d8f6468-htsrs" event={"ID":"54b11ed9-a528-468d-ad77-89ee83d042c5","Type":"ContainerStarted","Data":"b5ec72c7a07e63c0579c322c043938266194df6972d26fd4bc42bff8cd2e1b8f"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.215797 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-577d8f6468-htsrs" event={"ID":"54b11ed9-a528-468d-ad77-89ee83d042c5","Type":"ContainerStarted","Data":"3101800e54f402ed2a23af1f3aee27b29f2d43b6c77b8db6cee81f7af07674c1"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.216150 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.223951 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cf66499c9-k855m" event={"ID":"9b3545e1-27f7-421f-9471-809d6b04706d","Type":"ContainerStarted","Data":"783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.224088 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.225562 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-78dd4688df-l25nk" podStartSLOduration=15.225547105 podStartE2EDuration="15.225547105s" podCreationTimestamp="2026-02-18 14:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:32.20902085 +0000 UTC m=+1384.704741762" watchObservedRunningTime="2026-02-18 14:22:32.225547105 +0000 UTC m=+1384.721268037" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.230597 4739 generic.go:334] "Generic (PLEG): container finished" podID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" containerID="cff21032675d69321ff58f5bdd004b9b71a78b6909e645fa2ed5105f4cac95f4" exitCode=0 Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.230660 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74f6568664-l6ffq" event={"ID":"3a6654bc-87e3-4bd4-9f38-08f64907ea4c","Type":"ContainerDied","Data":"cff21032675d69321ff58f5bdd004b9b71a78b6909e645fa2ed5105f4cac95f4"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.236536 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-59f4cc7b48-2kzkr" podStartSLOduration=12.236515301 podStartE2EDuration="12.236515301s" podCreationTimestamp="2026-02-18 14:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:32.230352656 +0000 UTC m=+1384.726073578" watchObservedRunningTime="2026-02-18 14:22:32.236515301 +0000 UTC m=+1384.732236233" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.274002 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" podStartSLOduration=12.273975693 podStartE2EDuration="12.273975693s" podCreationTimestamp="2026-02-18 14:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:32.26273456 +0000 UTC m=+1384.758455492" watchObservedRunningTime="2026-02-18 14:22:32.273975693 +0000 UTC m=+1384.769696615" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.345487 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-cf66499c9-k855m" podStartSLOduration=15.3454632 podStartE2EDuration="15.3454632s" podCreationTimestamp="2026-02-18 14:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:32.289955865 +0000 UTC m=+1384.785676787" watchObservedRunningTime="2026-02-18 14:22:32.3454632 +0000 UTC m=+1384.841184142" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.358731 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-577d8f6468-htsrs" podStartSLOduration=15.358708923 podStartE2EDuration="15.358708923s" podCreationTimestamp="2026-02-18 14:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:32.342470055 +0000 UTC m=+1384.838190987" watchObservedRunningTime="2026-02-18 14:22:32.358708923 +0000 UTC m=+1384.854429855" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.478880 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" path="/var/lib/kubelet/pods/7e8a55f3-28f4-46da-bc87-6d16902b2dba/volumes" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.479728 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" path="/var/lib/kubelet/pods/9337767c-12ba-460b-854a-5c2e69db4a5c/volumes" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.773357 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.922072 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.952788 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle\") pod \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.953099 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data\") pod \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.953194 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65h5g\" (UniqueName: \"kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g\") pod \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.953271 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom\") pod \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.970779 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a6654bc-87e3-4bd4-9f38-08f64907ea4c" (UID: 
"3a6654bc-87e3-4bd4-9f38-08f64907ea4c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.971371 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g" (OuterVolumeSpecName: "kube-api-access-65h5g") pod "3a6654bc-87e3-4bd4-9f38-08f64907ea4c" (UID: "3a6654bc-87e3-4bd4-9f38-08f64907ea4c"). InnerVolumeSpecName "kube-api-access-65h5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.013687 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a6654bc-87e3-4bd4-9f38-08f64907ea4c" (UID: "3a6654bc-87e3-4bd4-9f38-08f64907ea4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.043572 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data" (OuterVolumeSpecName: "config-data") pod "3a6654bc-87e3-4bd4-9f38-08f64907ea4c" (UID: "3a6654bc-87e3-4bd4-9f38-08f64907ea4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.057385 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom\") pod \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.057754 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh2gn\" (UniqueName: \"kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn\") pod \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.057810 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data\") pod \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.057852 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle\") pod \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.058551 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.058579 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65h5g\" (UniqueName: \"kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g\") on node 
\"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.058593 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.058608 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.062232 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn" (OuterVolumeSpecName: "kube-api-access-qh2gn") pod "93ebc0dc-ca08-4c3e-bf54-d6530d56c322" (UID: "93ebc0dc-ca08-4c3e-bf54-d6530d56c322"). InnerVolumeSpecName "kube-api-access-qh2gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.062554 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "93ebc0dc-ca08-4c3e-bf54-d6530d56c322" (UID: "93ebc0dc-ca08-4c3e-bf54-d6530d56c322"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.097553 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93ebc0dc-ca08-4c3e-bf54-d6530d56c322" (UID: "93ebc0dc-ca08-4c3e-bf54-d6530d56c322"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.139530 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data" (OuterVolumeSpecName: "config-data") pod "93ebc0dc-ca08-4c3e-bf54-d6530d56c322" (UID: "93ebc0dc-ca08-4c3e-bf54-d6530d56c322"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.161248 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.161282 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.161292 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.161301 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh2gn\" (UniqueName: \"kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.247256 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b54c68f9b-f929d" event={"ID":"93ebc0dc-ca08-4c3e-bf54-d6530d56c322","Type":"ContainerDied","Data":"d5d149d08742d33f66584c180e4bcc703eac2ef7429ac5a118311ff5b7b3d10b"} Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.247312 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.247327 4739 scope.go:117] "RemoveContainer" containerID="311d21994840f4dffed976021db3e086569fe52a37728ae38e5c972914ef7d61" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.249117 4739 generic.go:334] "Generic (PLEG): container finished" podID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerID="b5ec72c7a07e63c0579c322c043938266194df6972d26fd4bc42bff8cd2e1b8f" exitCode=1 Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.249182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-577d8f6468-htsrs" event={"ID":"54b11ed9-a528-468d-ad77-89ee83d042c5","Type":"ContainerDied","Data":"b5ec72c7a07e63c0579c322c043938266194df6972d26fd4bc42bff8cd2e1b8f"} Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.249964 4739 scope.go:117] "RemoveContainer" containerID="b5ec72c7a07e63c0579c322c043938266194df6972d26fd4bc42bff8cd2e1b8f" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.255531 4739 generic.go:334] "Generic (PLEG): container finished" podID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerID="380597d90d5bc9556e8ce886d3f60776a514e8cf358489da36b8633a600f819d" exitCode=1 Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.255609 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerDied","Data":"380597d90d5bc9556e8ce886d3f60776a514e8cf358489da36b8633a600f819d"} Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.256280 4739 scope.go:117] "RemoveContainer" containerID="380597d90d5bc9556e8ce886d3f60776a514e8cf358489da36b8633a600f819d" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.264179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74f6568664-l6ffq" 
event={"ID":"3a6654bc-87e3-4bd4-9f38-08f64907ea4c","Type":"ContainerDied","Data":"d6095d355750dd1b26e4a1ff757c91ef13850ef0b2d9531d6b4d28aeda570b18"} Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.264304 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.274554 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerStarted","Data":"be93e2023094d77daeb6b0949f4fa4b335efb2b640defae52fa9227796359a82"} Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.285699 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.290903 4739 scope.go:117] "RemoveContainer" containerID="cff21032675d69321ff58f5bdd004b9b71a78b6909e645fa2ed5105f4cac95f4" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.338190 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"] Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.348548 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.358197 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"] Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.380941 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"] Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.387602 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"] Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.288943 4739 generic.go:334] "Generic (PLEG): container finished" podID="54b11ed9-a528-468d-ad77-89ee83d042c5" 
containerID="5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e" exitCode=1 Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.289303 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-577d8f6468-htsrs" event={"ID":"54b11ed9-a528-468d-ad77-89ee83d042c5","Type":"ContainerDied","Data":"5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e"} Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.289339 4739 scope.go:117] "RemoveContainer" containerID="b5ec72c7a07e63c0579c322c043938266194df6972d26fd4bc42bff8cd2e1b8f" Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.290081 4739 scope.go:117] "RemoveContainer" containerID="5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e" Feb 18 14:22:34 crc kubenswrapper[4739]: E0218 14:22:34.290550 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-577d8f6468-htsrs_openstack(54b11ed9-a528-468d-ad77-89ee83d042c5)\"" pod="openstack/heat-api-577d8f6468-htsrs" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.297464 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerStarted","Data":"9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9"} Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.298752 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.311089 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerStarted","Data":"c96d27898d93129b2467e8305f0c2d0db08996645c837c128b9af6d8943220a0"} Feb 18 14:22:34 crc 
kubenswrapper[4739]: I0218 14:22:34.489761 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" path="/var/lib/kubelet/pods/3a6654bc-87e3-4bd4-9f38-08f64907ea4c/volumes" Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.492769 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" path="/var/lib/kubelet/pods/93ebc0dc-ca08-4c3e-bf54-d6530d56c322/volumes" Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.324760 4739 scope.go:117] "RemoveContainer" containerID="5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e" Feb 18 14:22:35 crc kubenswrapper[4739]: E0218 14:22:35.325511 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-577d8f6468-htsrs_openstack(54b11ed9-a528-468d-ad77-89ee83d042c5)\"" pod="openstack/heat-api-577d8f6468-htsrs" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.326582 4739 generic.go:334] "Generic (PLEG): container finished" podID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerID="9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9" exitCode=1 Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.326665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerDied","Data":"9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9"} Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.326732 4739 scope.go:117] "RemoveContainer" containerID="380597d90d5bc9556e8ce886d3f60776a514e8cf358489da36b8633a600f819d" Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.326927 4739 scope.go:117] "RemoveContainer" containerID="9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9" 
Feb 18 14:22:35 crc kubenswrapper[4739]: E0218 14:22:35.327153 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-78dd4688df-l25nk_openstack(d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3)\"" pod="openstack/heat-cfnapi-78dd4688df-l25nk" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.342023 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerStarted","Data":"fc1c03ec69e9592ccc3a7f657270ef2ff69bf15bfec1f8afdeef655e026a5dcc"} Feb 18 14:22:36 crc kubenswrapper[4739]: I0218 14:22:36.367973 4739 scope.go:117] "RemoveContainer" containerID="9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9" Feb 18 14:22:36 crc kubenswrapper[4739]: E0218 14:22:36.368418 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-78dd4688df-l25nk_openstack(d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3)\"" pod="openstack/heat-cfnapi-78dd4688df-l25nk" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" Feb 18 14:22:36 crc kubenswrapper[4739]: I0218 14:22:36.645691 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382034 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerStarted","Data":"bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f"} Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382317 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 
14:22:37.382273 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-central-agent" containerID="cri-o://be93e2023094d77daeb6b0949f4fa4b335efb2b640defae52fa9227796359a82" gracePeriod=30 Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382430 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="proxy-httpd" containerID="cri-o://bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f" gracePeriod=30 Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382596 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-notification-agent" containerID="cri-o://c96d27898d93129b2467e8305f0c2d0db08996645c837c128b9af6d8943220a0" gracePeriod=30 Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382707 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="sg-core" containerID="cri-o://fc1c03ec69e9592ccc3a7f657270ef2ff69bf15bfec1f8afdeef655e026a5dcc" gracePeriod=30 Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.427203 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.176661707 podStartE2EDuration="8.427180394s" podCreationTimestamp="2026-02-18 14:22:29 +0000 UTC" firstStartedPulling="2026-02-18 14:22:31.431869432 +0000 UTC m=+1383.927590354" lastFinishedPulling="2026-02-18 14:22:36.682388119 +0000 UTC m=+1389.178109041" observedRunningTime="2026-02-18 14:22:37.412211357 +0000 UTC m=+1389.907932289" watchObservedRunningTime="2026-02-18 14:22:37.427180394 +0000 UTC m=+1389.922901316" Feb 18 14:22:37 crc 
kubenswrapper[4739]: E0218 14:22:37.776365 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9138cdd_fae9_4563_8fea_43df3f704da4.slice/crio-conmon-bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.285961 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.286286 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.287258 4739 scope.go:117] "RemoveContainer" containerID="5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e" Feb 18 14:22:38 crc kubenswrapper[4739]: E0218 14:22:38.287724 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-577d8f6468-htsrs_openstack(54b11ed9-a528-468d-ad77-89ee83d042c5)\"" pod="openstack/heat-api-577d8f6468-htsrs" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.348946 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.349932 4739 scope.go:117] "RemoveContainer" containerID="9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9" Feb 18 14:22:38 crc kubenswrapper[4739]: E0218 14:22:38.350233 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi 
pod=heat-cfnapi-78dd4688df-l25nk_openstack(d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3)\"" pod="openstack/heat-cfnapi-78dd4688df-l25nk" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407124 4739 generic.go:334] "Generic (PLEG): container finished" podID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerID="bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f" exitCode=0 Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407144 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerDied","Data":"bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f"} Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerDied","Data":"fc1c03ec69e9592ccc3a7f657270ef2ff69bf15bfec1f8afdeef655e026a5dcc"} Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407164 4739 generic.go:334] "Generic (PLEG): container finished" podID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerID="fc1c03ec69e9592ccc3a7f657270ef2ff69bf15bfec1f8afdeef655e026a5dcc" exitCode=2 Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407242 4739 generic.go:334] "Generic (PLEG): container finished" podID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerID="c96d27898d93129b2467e8305f0c2d0db08996645c837c128b9af6d8943220a0" exitCode=0 Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407268 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerDied","Data":"c96d27898d93129b2467e8305f0c2d0db08996645c837c128b9af6d8943220a0"} Feb 18 14:22:42 crc kubenswrapper[4739]: I0218 14:22:42.346478 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:42 crc kubenswrapper[4739]: I0218 14:22:42.438285 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:42 crc kubenswrapper[4739]: I0218 14:22:42.855552 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:42 crc kubenswrapper[4739]: I0218 14:22:42.946548 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:42 crc kubenswrapper[4739]: I0218 14:22:42.979086 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.106036 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom\") pod \"54b11ed9-a528-468d-ad77-89ee83d042c5\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.106183 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle\") pod \"54b11ed9-a528-468d-ad77-89ee83d042c5\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.106366 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4p84\" (UniqueName: \"kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84\") pod \"54b11ed9-a528-468d-ad77-89ee83d042c5\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.106475 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data\") pod \"54b11ed9-a528-468d-ad77-89ee83d042c5\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.126491 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54b11ed9-a528-468d-ad77-89ee83d042c5" (UID: "54b11ed9-a528-468d-ad77-89ee83d042c5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.130758 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84" (OuterVolumeSpecName: "kube-api-access-n4p84") pod "54b11ed9-a528-468d-ad77-89ee83d042c5" (UID: "54b11ed9-a528-468d-ad77-89ee83d042c5"). InnerVolumeSpecName "kube-api-access-n4p84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.165697 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54b11ed9-a528-468d-ad77-89ee83d042c5" (UID: "54b11ed9-a528-468d-ad77-89ee83d042c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.189866 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data" (OuterVolumeSpecName: "config-data") pod "54b11ed9-a528-468d-ad77-89ee83d042c5" (UID: "54b11ed9-a528-468d-ad77-89ee83d042c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.209209 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4p84\" (UniqueName: \"kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.209248 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.209262 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.209273 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.458582 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.471184 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.471209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-577d8f6468-htsrs" event={"ID":"54b11ed9-a528-468d-ad77-89ee83d042c5","Type":"ContainerDied","Data":"3101800e54f402ed2a23af1f3aee27b29f2d43b6c77b8db6cee81f7af07674c1"} Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.471274 4739 scope.go:117] "RemoveContainer" containerID="5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.473405 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerDied","Data":"8eee24ec7f9accbd61a3e88a575fabd5b156dc4338ac144c30c542bf27a434fc"} Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.473502 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.515336 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom\") pod \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.515724 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hlgn\" (UniqueName: \"kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn\") pod \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.515955 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data\") pod \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.516726 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle\") pod \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.524209 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" (UID: "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.527058 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn" (OuterVolumeSpecName: "kube-api-access-7hlgn") pod "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" (UID: "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3"). InnerVolumeSpecName "kube-api-access-7hlgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.538691 4739 scope.go:117] "RemoveContainer" containerID="9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.542487 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.569018 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.583195 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" (UID: "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.619764 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.619806 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.619820 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hlgn\" (UniqueName: \"kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.620924 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data" (OuterVolumeSpecName: "config-data") pod "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" (UID: "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.722526 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.828078 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.847626 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.427535 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" path="/var/lib/kubelet/pods/54b11ed9-a528-468d-ad77-89ee83d042c5/volumes" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.429025 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" path="/var/lib/kubelet/pods/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3/volumes" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.490394 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-frlf8"] Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.493086 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.493110 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.493926 4739 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.493980 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494025 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494058 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-api" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494074 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494087 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494097 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494107 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494117 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494123 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494145 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-httpd" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494153 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-httpd" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494199 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494207 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494221 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="init" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494226 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="init" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495046 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495068 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495082 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495099 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495110 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" 
containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495122 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495130 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495138 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-httpd" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.496184 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.540007 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-frlf8"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.542081 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xh99\" (UniqueName: \"kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.542243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.614983 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6q6nn"] Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 
14:22:44.615649 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.615671 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.615939 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.616795 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.640003 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6q6nn"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.644010 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xh99\" (UniqueName: \"kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.644124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.644156 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9gj\" (UniqueName: \"kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj\") pod \"nova-cell0-db-create-6q6nn\" (UID: 
\"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.644243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.645108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.684798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xh99\" (UniqueName: \"kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.714335 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-79vbk"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.716054 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.742525 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-04e8-account-create-update-9qcd6"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.744924 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.746280 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n9gj\" (UniqueName: \"kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.746614 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.746698 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.746737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gb5\" (UniqueName: \"kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.747795 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.753288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.755111 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-79vbk"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.776263 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-04e8-account-create-update-9qcd6"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.781716 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n9gj\" (UniqueName: \"kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.822223 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.849039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.849136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wsrq\" (UniqueName: \"kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.849354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.849392 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6gb5\" (UniqueName: \"kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.850649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts\") pod 
\"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.894380 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6gb5\" (UniqueName: \"kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.910135 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-022d-account-create-update-6krg8"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.911900 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.918838 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.944586 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.952060 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.952153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdnwh\" (UniqueName: \"kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh\") pod \"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.952213 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wsrq\" (UniqueName: \"kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.952349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts\") pod \"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.954645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.965912 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-022d-account-create-update-6krg8"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.981201 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wsrq\" (UniqueName: \"kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.055892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts\") pod \"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.056359 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdnwh\" (UniqueName: \"kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh\") pod \"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.057174 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts\") pod 
\"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.146057 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.159204 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.160433 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdnwh\" (UniqueName: \"kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh\") pod \"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.346162 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8ab4-account-create-update-zkq89"] Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.348683 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.381842 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.409578 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ab4-account-create-update-zkq89"] Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.468165 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27v8n\" (UniqueName: \"kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.468295 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.561226 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.571193 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27v8n\" (UniqueName: \"kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.571287 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.575047 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.610239 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27v8n\" (UniqueName: \"kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.666735 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.292462 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-frlf8"] Feb 18 14:22:46 crc kubenswrapper[4739]: W0218 14:22:46.329796 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod290b50b0_4283_4a40_b694_4a5f18b39b1a.slice/crio-097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5 WatchSource:0}: Error finding container 097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5: Status 404 returned error can't find the container with id 097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5 Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.559657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-frlf8" event={"ID":"290b50b0-4283-4a40-b694-4a5f18b39b1a","Type":"ContainerStarted","Data":"097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5"} Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.577761 4739 generic.go:334] "Generic (PLEG): container finished" podID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerID="be93e2023094d77daeb6b0949f4fa4b335efb2b640defae52fa9227796359a82" exitCode=0 Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.577842 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerDied","Data":"be93e2023094d77daeb6b0949f4fa4b335efb2b640defae52fa9227796359a82"} Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.582067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6699e575-f077-433c-a257-f65f329d6e69","Type":"ContainerStarted","Data":"cf3227a54466fa6eb6ab918b31a85c454ca4079bc1e21308ecac7a95552305d2"} Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 
14:22:46.608932 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.016979257 podStartE2EDuration="38.608910525s" podCreationTimestamp="2026-02-18 14:22:08 +0000 UTC" firstStartedPulling="2026-02-18 14:22:09.462723852 +0000 UTC m=+1361.958444764" lastFinishedPulling="2026-02-18 14:22:45.05465511 +0000 UTC m=+1397.550376032" observedRunningTime="2026-02-18 14:22:46.597654282 +0000 UTC m=+1399.093375214" watchObservedRunningTime="2026-02-18 14:22:46.608910525 +0000 UTC m=+1399.104631457" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.035646 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-79vbk"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.049837 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-022d-account-create-update-6krg8"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.059020 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6q6nn"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.081801 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-04e8-account-create-update-9qcd6"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.100519 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ab4-account-create-update-zkq89"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.175376 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259178 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259497 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259533 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259614 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bldt\" (UniqueName: \"kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259697 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259751 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259803 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.261392 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.264855 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.275824 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts" (OuterVolumeSpecName: "scripts") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.299643 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt" (OuterVolumeSpecName: "kube-api-access-6bldt") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "kube-api-access-6bldt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.364261 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.364659 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.364877 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.364894 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bldt\" (UniqueName: \"kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.454668 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.467583 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.500363 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.570210 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.595427 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerDied","Data":"f49f9c840da6b7b1c2c162adfd6ff58755e7165a8c2d9b23a26c34f3222084fc"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.595698 4739 scope.go:117] "RemoveContainer" containerID="bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.595779 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.599223 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-022d-account-create-update-6krg8" event={"ID":"429115da-eb66-4dc9-9210-86cd0525a6cf","Type":"ContainerStarted","Data":"8f643b3c3825709517f6c978998cc8e0df337adc7bd11bd8db7809c215c86a97"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.602144 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79vbk" event={"ID":"c33399d1-a28e-4e19-aba8-a218018e5e8b","Type":"ContainerStarted","Data":"a45f373a8cda7e2f9713f9d9f6800072809e946454ad4d5c72a1b5d375df9110"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.618900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04e8-account-create-update-9qcd6" event={"ID":"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd","Type":"ContainerStarted","Data":"9cd1858f14ae85d0f7063cb271cc274976ef43ee00a4a7c3fd47f776b8e0b625"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.628313 4739 generic.go:334] "Generic (PLEG): container finished" podID="290b50b0-4283-4a40-b694-4a5f18b39b1a" containerID="164ed4c991352152994d527ba5112c6e7d1903b4f2261af5e3d479652dee7c0f" exitCode=0 Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.628650 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-frlf8" event={"ID":"290b50b0-4283-4a40-b694-4a5f18b39b1a","Type":"ContainerDied","Data":"164ed4c991352152994d527ba5112c6e7d1903b4f2261af5e3d479652dee7c0f"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.642704 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6q6nn" event={"ID":"f689babc-92f9-4e45-8fb3-40722e18cd10","Type":"ContainerStarted","Data":"cac8e29f4124c59125768bbdfaf1b58c6ed47894b8741c363bc06a9800327c90"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.646835 4739 scope.go:117] 
"RemoveContainer" containerID="fc1c03ec69e9592ccc3a7f657270ef2ff69bf15bfec1f8afdeef655e026a5dcc" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.681736 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" event={"ID":"1f229688-5021-4d28-9109-98071744a102","Type":"ContainerStarted","Data":"8af9a41bdf06992c533dfc886a1664fdb3c42e54989216dae72b122b1c38a89d"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.701016 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data" (OuterVolumeSpecName: "config-data") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.725621 4739 scope.go:117] "RemoveContainer" containerID="c96d27898d93129b2467e8305f0c2d0db08996645c837c128b9af6d8943220a0" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.775797 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.785539 4739 scope.go:117] "RemoveContainer" containerID="be93e2023094d77daeb6b0949f4fa4b335efb2b640defae52fa9227796359a82" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.966045 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.980229 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.007811 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:48 crc kubenswrapper[4739]: E0218 14:22:48.008333 4739 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-central-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008350 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-central-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: E0218 14:22:48.008368 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="sg-core" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008374 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="sg-core" Feb 18 14:22:48 crc kubenswrapper[4739]: E0218 14:22:48.008399 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="proxy-httpd" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008407 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="proxy-httpd" Feb 18 14:22:48 crc kubenswrapper[4739]: E0218 14:22:48.008434 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-notification-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008454 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-notification-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008644 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="sg-core" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008663 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-notification-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008682 
4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="proxy-httpd" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008690 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-central-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.012870 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.015766 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.015887 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.027995 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.109339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.109759 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcscx\" (UniqueName: \"kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.109901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.109946 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.110014 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.110092 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.110119 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.126550 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.194994 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"] Feb 18 
14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.195233 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-8c9d795d5-hcnvm" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine" containerID="cri-o://82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" gracePeriod=60 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212148 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212241 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212353 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212368 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data\") pod 
\"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212468 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212528 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcscx\" (UniqueName: \"kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.213301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.214250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.223781 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.232714 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.236591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcscx\" (UniqueName: \"kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.236994 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.246746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.372133 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.436220 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" path="/var/lib/kubelet/pods/f9138cdd-fae9-4563-8fea-43df3f704da4/volumes" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.728814 4739 generic.go:334] "Generic (PLEG): container finished" podID="1f229688-5021-4d28-9109-98071744a102" containerID="cd193d9c848f0cb5846f4803a361ea578be3e4975f2d687992d1efc73cd54125" exitCode=0 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.729214 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" event={"ID":"1f229688-5021-4d28-9109-98071744a102","Type":"ContainerDied","Data":"cd193d9c848f0cb5846f4803a361ea578be3e4975f2d687992d1efc73cd54125"} Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.763278 4739 generic.go:334] "Generic (PLEG): container finished" podID="429115da-eb66-4dc9-9210-86cd0525a6cf" containerID="f2e4b9fb06b8dfc6962768e47edc73a399125a6a5af8a24a17fe6e665b490f62" exitCode=0 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.764855 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-022d-account-create-update-6krg8" event={"ID":"429115da-eb66-4dc9-9210-86cd0525a6cf","Type":"ContainerDied","Data":"f2e4b9fb06b8dfc6962768e47edc73a399125a6a5af8a24a17fe6e665b490f62"} Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.787416 4739 generic.go:334] "Generic (PLEG): container finished" podID="c33399d1-a28e-4e19-aba8-a218018e5e8b" containerID="d354c12b67eababcd672627661526374e41cf79bf2c5f51fc2d961512732ad80" exitCode=0 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.787557 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79vbk" 
event={"ID":"c33399d1-a28e-4e19-aba8-a218018e5e8b","Type":"ContainerDied","Data":"d354c12b67eababcd672627661526374e41cf79bf2c5f51fc2d961512732ad80"} Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.817439 4739 generic.go:334] "Generic (PLEG): container finished" podID="1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" containerID="c294346ed483351749b57b335bfd04c525dff76c2eb0efbc4e1ea2d1c1b22ce8" exitCode=0 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.817736 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04e8-account-create-update-9qcd6" event={"ID":"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd","Type":"ContainerDied","Data":"c294346ed483351749b57b335bfd04c525dff76c2eb0efbc4e1ea2d1c1b22ce8"} Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.827884 4739 generic.go:334] "Generic (PLEG): container finished" podID="f689babc-92f9-4e45-8fb3-40722e18cd10" containerID="f180991429bb7c01f25e8e0932cfc4a2c2e639764155f5051da2395874ce4177" exitCode=0 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.828232 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6q6nn" event={"ID":"f689babc-92f9-4e45-8fb3-40722e18cd10","Type":"ContainerDied","Data":"f180991429bb7c01f25e8e0932cfc4a2c2e639764155f5051da2395874ce4177"} Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.964879 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.369302 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-frlf8"
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.557916 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xh99\" (UniqueName: \"kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99\") pod \"290b50b0-4283-4a40-b694-4a5f18b39b1a\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") "
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.558236 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts\") pod \"290b50b0-4283-4a40-b694-4a5f18b39b1a\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") "
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.559003 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "290b50b0-4283-4a40-b694-4a5f18b39b1a" (UID: "290b50b0-4283-4a40-b694-4a5f18b39b1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.563666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99" (OuterVolumeSpecName: "kube-api-access-2xh99") pod "290b50b0-4283-4a40-b694-4a5f18b39b1a" (UID: "290b50b0-4283-4a40-b694-4a5f18b39b1a"). InnerVolumeSpecName "kube-api-access-2xh99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.675169 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xh99\" (UniqueName: \"kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.675431 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.841518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerStarted","Data":"3e20d5bc67da999c67b2b030638e14f2a7846dbe20d76ce5dce6686024c72645"}
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.841563 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerStarted","Data":"363130708c866244a3e77f2ddb6aef5f2bdac939b2f9d1e05276302b2523678f"}
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.846780 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-frlf8"
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.847558 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-frlf8" event={"ID":"290b50b0-4283-4a40-b694-4a5f18b39b1a","Type":"ContainerDied","Data":"097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5"}
Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.847605 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.527120 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.721608 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts\") pod \"1f229688-5021-4d28-9109-98071744a102\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") "
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.721801 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27v8n\" (UniqueName: \"kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n\") pod \"1f229688-5021-4d28-9109-98071744a102\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") "
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.723058 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f229688-5021-4d28-9109-98071744a102" (UID: "1f229688-5021-4d28-9109-98071744a102"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.744751 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n" (OuterVolumeSpecName: "kube-api-access-27v8n") pod "1f229688-5021-4d28-9109-98071744a102" (UID: "1f229688-5021-4d28-9109-98071744a102"). InnerVolumeSpecName "kube-api-access-27v8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.773886 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-79vbk"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.778612 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6q6nn"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.789794 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-022d-account-create-update-6krg8"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.830544 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.830632 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27v8n\" (UniqueName: \"kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.845858 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04e8-account-create-update-9qcd6"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.894167 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" event={"ID":"1f229688-5021-4d28-9109-98071744a102","Type":"ContainerDied","Data":"8af9a41bdf06992c533dfc886a1664fdb3c42e54989216dae72b122b1c38a89d"}
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.894215 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af9a41bdf06992c533dfc886a1664fdb3c42e54989216dae72b122b1c38a89d"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.894300 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.896485 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-022d-account-create-update-6krg8" event={"ID":"429115da-eb66-4dc9-9210-86cd0525a6cf","Type":"ContainerDied","Data":"8f643b3c3825709517f6c978998cc8e0df337adc7bd11bd8db7809c215c86a97"}
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.896526 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f643b3c3825709517f6c978998cc8e0df337adc7bd11bd8db7809c215c86a97"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.896579 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-022d-account-create-update-6krg8"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.899637 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerStarted","Data":"0bb35ababf8f49716c465fd1a071a3fc61371f1c41007f69d57d1ece07a81b5b"}
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.912528 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79vbk" event={"ID":"c33399d1-a28e-4e19-aba8-a218018e5e8b","Type":"ContainerDied","Data":"a45f373a8cda7e2f9713f9d9f6800072809e946454ad4d5c72a1b5d375df9110"}
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.912570 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a45f373a8cda7e2f9713f9d9f6800072809e946454ad4d5c72a1b5d375df9110"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.912637 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-79vbk"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.915123 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04e8-account-create-update-9qcd6" event={"ID":"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd","Type":"ContainerDied","Data":"9cd1858f14ae85d0f7063cb271cc274976ef43ee00a4a7c3fd47f776b8e0b625"}
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.915149 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd1858f14ae85d0f7063cb271cc274976ef43ee00a4a7c3fd47f776b8e0b625"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.915190 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04e8-account-create-update-9qcd6"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.917147 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6q6nn" event={"ID":"f689babc-92f9-4e45-8fb3-40722e18cd10","Type":"ContainerDied","Data":"cac8e29f4124c59125768bbdfaf1b58c6ed47894b8741c363bc06a9800327c90"}
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.917170 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac8e29f4124c59125768bbdfaf1b58c6ed47894b8741c363bc06a9800327c90"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.917208 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6q6nn"
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.936866 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts\") pod \"f689babc-92f9-4e45-8fb3-40722e18cd10\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") "
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.936979 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n9gj\" (UniqueName: \"kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj\") pod \"f689babc-92f9-4e45-8fb3-40722e18cd10\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") "
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937093 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts\") pod \"c33399d1-a28e-4e19-aba8-a218018e5e8b\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") "
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937121 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wsrq\" (UniqueName: \"kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq\") pod \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") "
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts\") pod \"429115da-eb66-4dc9-9210-86cd0525a6cf\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") "
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937272 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts\") pod \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") "
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937314 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f689babc-92f9-4e45-8fb3-40722e18cd10" (UID: "f689babc-92f9-4e45-8fb3-40722e18cd10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937333 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6gb5\" (UniqueName: \"kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5\") pod \"c33399d1-a28e-4e19-aba8-a218018e5e8b\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") "
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937395 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdnwh\" (UniqueName: \"kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh\") pod \"429115da-eb66-4dc9-9210-86cd0525a6cf\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") "
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.938502 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.938518 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" (UID: "1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.939016 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c33399d1-a28e-4e19-aba8-a218018e5e8b" (UID: "c33399d1-a28e-4e19-aba8-a218018e5e8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.939080 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "429115da-eb66-4dc9-9210-86cd0525a6cf" (UID: "429115da-eb66-4dc9-9210-86cd0525a6cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.945775 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh" (OuterVolumeSpecName: "kube-api-access-qdnwh") pod "429115da-eb66-4dc9-9210-86cd0525a6cf" (UID: "429115da-eb66-4dc9-9210-86cd0525a6cf"). InnerVolumeSpecName "kube-api-access-qdnwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.945828 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq" (OuterVolumeSpecName: "kube-api-access-4wsrq") pod "1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" (UID: "1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd"). InnerVolumeSpecName "kube-api-access-4wsrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.947582 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5" (OuterVolumeSpecName: "kube-api-access-g6gb5") pod "c33399d1-a28e-4e19-aba8-a218018e5e8b" (UID: "c33399d1-a28e-4e19-aba8-a218018e5e8b"). InnerVolumeSpecName "kube-api-access-g6gb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.949738 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj" (OuterVolumeSpecName: "kube-api-access-5n9gj") pod "f689babc-92f9-4e45-8fb3-40722e18cd10" (UID: "f689babc-92f9-4e45-8fb3-40722e18cd10"). InnerVolumeSpecName "kube-api-access-5n9gj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040494 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040529 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6gb5\" (UniqueName: \"kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040544 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdnwh\" (UniqueName: \"kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040554 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n9gj\" (UniqueName: \"kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040564 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040573 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wsrq\" (UniqueName: \"kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040581 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:22:51 crc kubenswrapper[4739]: E0218 14:22:51.131982 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 18 14:22:51 crc kubenswrapper[4739]: E0218 14:22:51.133129 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 18 14:22:51 crc kubenswrapper[4739]: E0218 14:22:51.134182 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 18 14:22:51 crc kubenswrapper[4739]: E0218 14:22:51.134226 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8c9d795d5-hcnvm" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine"
Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.953694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerStarted","Data":"68e9714ba536a43d37501d6b7f010d3c6c39bb5acb025c1ebc16c210fbdc0c5c"}
Feb 18 14:22:53 crc kubenswrapper[4739]: I0218 14:22:53.977406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerStarted","Data":"4436b566cc1f05e9fd1f4a6b477aee31ea85c52d7a160c7100ca69ed4da051cd"}
Feb 18 14:22:53 crc kubenswrapper[4739]: I0218 14:22:53.977976 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 14:22:53 crc kubenswrapper[4739]: I0218 14:22:53.999882 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.675379856 podStartE2EDuration="6.999856295s" podCreationTimestamp="2026-02-18 14:22:47 +0000 UTC" firstStartedPulling="2026-02-18 14:22:48.97825544 +0000 UTC m=+1401.473976362" lastFinishedPulling="2026-02-18 14:22:53.302731879 +0000 UTC m=+1405.798452801" observedRunningTime="2026-02-18 14:22:53.995167097 +0000 UTC m=+1406.490888029" watchObservedRunningTime="2026-02-18 14:22:53.999856295 +0000 UTC m=+1406.495577217"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.182404 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfg9d"]
Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.182913 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429115da-eb66-4dc9-9210-86cd0525a6cf" containerName="mariadb-account-create-update"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.182931 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="429115da-eb66-4dc9-9210-86cd0525a6cf" containerName="mariadb-account-create-update"
Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.182953 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f689babc-92f9-4e45-8fb3-40722e18cd10" containerName="mariadb-database-create"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.182961 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f689babc-92f9-4e45-8fb3-40722e18cd10" containerName="mariadb-database-create"
Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.182985 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290b50b0-4283-4a40-b694-4a5f18b39b1a" containerName="mariadb-database-create"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.182994 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="290b50b0-4283-4a40-b694-4a5f18b39b1a" containerName="mariadb-database-create"
Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.183013 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" containerName="mariadb-account-create-update"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183021 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" containerName="mariadb-account-create-update"
Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.183036 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33399d1-a28e-4e19-aba8-a218018e5e8b" containerName="mariadb-database-create"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183043 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33399d1-a28e-4e19-aba8-a218018e5e8b" containerName="mariadb-database-create"
Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.183075 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f229688-5021-4d28-9109-98071744a102" containerName="mariadb-account-create-update"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183083 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f229688-5021-4d28-9109-98071744a102" containerName="mariadb-account-create-update"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183327 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f689babc-92f9-4e45-8fb3-40722e18cd10" containerName="mariadb-database-create"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183344 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" containerName="mariadb-account-create-update"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183359 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33399d1-a28e-4e19-aba8-a218018e5e8b" containerName="mariadb-database-create"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183376 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="429115da-eb66-4dc9-9210-86cd0525a6cf" containerName="mariadb-account-create-update"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183395 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f229688-5021-4d28-9109-98071744a102" containerName="mariadb-account-create-update"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183409 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="290b50b0-4283-4a40-b694-4a5f18b39b1a" containerName="mariadb-database-create"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.184374 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.192936 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.201731 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r74ht"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.201913 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.222200 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfg9d"]
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.339932 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq668\" (UniqueName: \"kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.340038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.340420 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.340881 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.443697 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.444799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq668\" (UniqueName: \"kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.444859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.444973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.450050 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.452812 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.466168 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq668\" (UniqueName: \"kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.466172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.506946 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:22:56 crc kubenswrapper[4739]: I0218 14:22:56.358740 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfg9d"]
Feb 18 14:22:57 crc kubenswrapper[4739]: I0218 14:22:57.075627 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" event={"ID":"2ed7afcd-a9be-4c59-836d-355e4c502a01","Type":"ContainerStarted","Data":"40ea49f88d331b4c7e345388fbc286ebb3f9c3af1caee046df2e917b02eb12a7"}
Feb 18 14:23:01 crc kubenswrapper[4739]: E0218 14:23:01.139689 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 18 14:23:01 crc kubenswrapper[4739]: E0218 14:23:01.143072 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 18 14:23:01 crc kubenswrapper[4739]: E0218 14:23:01.144759 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 18 14:23:01 crc kubenswrapper[4739]: E0218 14:23:01.144802 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8c9d795d5-hcnvm" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine"
Feb 18 14:23:04 crc kubenswrapper[4739]: I0218 14:23:04.206797 4739 generic.go:334] "Generic (PLEG): container finished" podID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" exitCode=0
Feb 18 14:23:04 crc kubenswrapper[4739]: I0218 14:23:04.207250 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8c9d795d5-hcnvm" event={"ID":"48f5a3e4-7bee-4689-b7b8-5869536bebb6","Type":"ContainerDied","Data":"82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990"}
Feb 18 14:23:06 crc kubenswrapper[4739]: I0218 14:23:06.891019 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8c9d795d5-hcnvm"
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.043301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data\") pod \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") "
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.043429 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom\") pod \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") "
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.043572 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle\") pod \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") "
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.043641 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lczr4\" (UniqueName: \"kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4\") pod \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") "
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.057890 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "48f5a3e4-7bee-4689-b7b8-5869536bebb6" (UID: "48f5a3e4-7bee-4689-b7b8-5869536bebb6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.057941 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4" (OuterVolumeSpecName: "kube-api-access-lczr4") pod "48f5a3e4-7bee-4689-b7b8-5869536bebb6" (UID: "48f5a3e4-7bee-4689-b7b8-5869536bebb6"). InnerVolumeSpecName "kube-api-access-lczr4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.090491 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48f5a3e4-7bee-4689-b7b8-5869536bebb6" (UID: "48f5a3e4-7bee-4689-b7b8-5869536bebb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.125714 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data" (OuterVolumeSpecName: "config-data") pod "48f5a3e4-7bee-4689-b7b8-5869536bebb6" (UID: "48f5a3e4-7bee-4689-b7b8-5869536bebb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.148201 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.148272 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.148288 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lczr4\" (UniqueName: \"kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.148306 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.251548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8c9d795d5-hcnvm" event={"ID":"48f5a3e4-7bee-4689-b7b8-5869536bebb6","Type":"ContainerDied","Data":"93bc0594ac2cecd77e6e563c92943e92190081b3c713021d84dd28fc365b4b5c"}
Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.251811 4739 scope.go:117] "RemoveContainer"
containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.251675 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.309575 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"] Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.326973 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"] Feb 18 14:23:08 crc kubenswrapper[4739]: I0218 14:23:08.264571 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" event={"ID":"2ed7afcd-a9be-4c59-836d-355e4c502a01","Type":"ContainerStarted","Data":"7decdedc36c29035cbd6c5768e12052f73ae02bcfb7ff083bd55e7ded7c3ba91"} Feb 18 14:23:08 crc kubenswrapper[4739]: I0218 14:23:08.283890 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" podStartSLOduration=2.144997534 podStartE2EDuration="13.283868238s" podCreationTimestamp="2026-02-18 14:22:55 +0000 UTC" firstStartedPulling="2026-02-18 14:22:56.389340967 +0000 UTC m=+1408.885061889" lastFinishedPulling="2026-02-18 14:23:07.528211671 +0000 UTC m=+1420.023932593" observedRunningTime="2026-02-18 14:23:08.278565815 +0000 UTC m=+1420.774286737" watchObservedRunningTime="2026-02-18 14:23:08.283868238 +0000 UTC m=+1420.779589160" Feb 18 14:23:08 crc kubenswrapper[4739]: I0218 14:23:08.425439 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" path="/var/lib/kubelet/pods/48f5a3e4-7bee-4689-b7b8-5869536bebb6/volumes" Feb 18 14:23:09 crc kubenswrapper[4739]: I0218 14:23:09.881801 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:09 crc 
kubenswrapper[4739]: I0218 14:23:09.882592 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-log" containerID="cri-o://7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768" gracePeriod=30 Feb 18 14:23:09 crc kubenswrapper[4739]: I0218 14:23:09.882722 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-httpd" containerID="cri-o://55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59" gracePeriod=30 Feb 18 14:23:10 crc kubenswrapper[4739]: I0218 14:23:10.294978 4739 generic.go:334] "Generic (PLEG): container finished" podID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerID="7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768" exitCode=143 Feb 18 14:23:10 crc kubenswrapper[4739]: I0218 14:23:10.295040 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerDied","Data":"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768"} Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.017228 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.018163 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-central-agent" containerID="cri-o://3e20d5bc67da999c67b2b030638e14f2a7846dbe20d76ce5dce6686024c72645" gracePeriod=30 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.018300 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" 
containerName="ceilometer-notification-agent" containerID="cri-o://0bb35ababf8f49716c465fd1a071a3fc61371f1c41007f69d57d1ece07a81b5b" gracePeriod=30 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.018271 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="proxy-httpd" containerID="cri-o://4436b566cc1f05e9fd1f4a6b477aee31ea85c52d7a160c7100ca69ed4da051cd" gracePeriod=30 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.018281 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="sg-core" containerID="cri-o://68e9714ba536a43d37501d6b7f010d3c6c39bb5acb025c1ebc16c210fbdc0c5c" gracePeriod=30 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.026768 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.230:3000/\": EOF" Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.330789 4739 generic.go:334] "Generic (PLEG): container finished" podID="043c7e92-488e-4581-b683-a50c6f3e4262" containerID="4436b566cc1f05e9fd1f4a6b477aee31ea85c52d7a160c7100ca69ed4da051cd" exitCode=0 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.330823 4739 generic.go:334] "Generic (PLEG): container finished" podID="043c7e92-488e-4581-b683-a50c6f3e4262" containerID="68e9714ba536a43d37501d6b7f010d3c6c39bb5acb025c1ebc16c210fbdc0c5c" exitCode=2 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.330847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerDied","Data":"4436b566cc1f05e9fd1f4a6b477aee31ea85c52d7a160c7100ca69ed4da051cd"} Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.330878 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerDied","Data":"68e9714ba536a43d37501d6b7f010d3c6c39bb5acb025c1ebc16c210fbdc0c5c"} Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.940727 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.106516 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.106672 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.106790 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.106853 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.106964 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.107009 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92f7z\" (UniqueName: \"kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.107027 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.107079 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.108643 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.108791 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs" (OuterVolumeSpecName: "logs") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.114296 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts" (OuterVolumeSpecName: "scripts") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.119958 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z" (OuterVolumeSpecName: "kube-api-access-92f7z") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "kube-api-access-92f7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.173092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15" (OuterVolumeSpecName: "glance") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "pvc-15694efd-23b4-48d1-830b-42bbc6c51b15". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.177974 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209678 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209707 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209720 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92f7z\" (UniqueName: \"kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209729 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209739 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209766 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") on node \"crc\" " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.211658 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: 
"3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.236797 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data" (OuterVolumeSpecName: "config-data") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.265489 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.265681 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-15694efd-23b4-48d1-830b-42bbc6c51b15" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15") on node "crc" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.312109 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.312144 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.312155 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.345232 4739 generic.go:334] "Generic (PLEG): 
container finished" podID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerID="55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59" exitCode=0 Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.345305 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.345328 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerDied","Data":"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59"} Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.345365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerDied","Data":"3a259073ef5437a741c7e7a8473f57ccd05a34b5954be95c2003c50962d48fb6"} Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.345386 4739 scope.go:117] "RemoveContainer" containerID="55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.349273 4739 generic.go:334] "Generic (PLEG): container finished" podID="043c7e92-488e-4581-b683-a50c6f3e4262" containerID="3e20d5bc67da999c67b2b030638e14f2a7846dbe20d76ce5dce6686024c72645" exitCode=0 Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.349393 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerDied","Data":"3e20d5bc67da999c67b2b030638e14f2a7846dbe20d76ce5dce6686024c72645"} Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.387495 4739 scope.go:117] "RemoveContainer" containerID="7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.392667 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.406967 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.433563 4739 scope.go:117] "RemoveContainer" containerID="55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.434870 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" path="/var/lib/kubelet/pods/3677acc3-fd05-4d33-ac6c-aa420ecce125/volumes" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.435917 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:14 crc kubenswrapper[4739]: E0218 14:23:14.436515 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-httpd" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.436608 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-httpd" Feb 18 14:23:14 crc kubenswrapper[4739]: E0218 14:23:14.436698 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-log" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.436753 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-log" Feb 18 14:23:14 crc kubenswrapper[4739]: E0218 14:23:14.436819 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.436865 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine" Feb 18 14:23:14 crc 
kubenswrapper[4739]: I0218 14:23:14.437146 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.437227 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-log" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.437290 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-httpd" Feb 18 14:23:14 crc kubenswrapper[4739]: E0218 14:23:14.438988 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59\": container with ID starting with 55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59 not found: ID does not exist" containerID="55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.439043 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59"} err="failed to get container status \"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59\": rpc error: code = NotFound desc = could not find container \"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59\": container with ID starting with 55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59 not found: ID does not exist" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.439081 4739 scope.go:117] "RemoveContainer" containerID="7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.440826 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: E0218 14:23:14.442164 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768\": container with ID starting with 7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768 not found: ID does not exist" containerID="7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.442310 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768"} err="failed to get container status \"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768\": rpc error: code = NotFound desc = could not find container \"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768\": container with ID starting with 7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768 not found: ID does not exist" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.443299 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.444544 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.445869 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.620564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.620706 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-logs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.620825 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkgxg\" (UniqueName: \"kubernetes.io/projected/43f517de-033c-467c-9937-df5706ee1ca2-kube-api-access-jkgxg\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.620943 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.621075 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.621211 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.621286 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.621325 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.723624 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.724343 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.725059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.725368 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.725629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-logs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.725878 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkgxg\" (UniqueName: \"kubernetes.io/projected/43f517de-033c-467c-9937-df5706ee1ca2-kube-api-access-jkgxg\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.726012 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.726160 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-logs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.726544 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.726552 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.731732 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.737545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.742420 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.743609 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.769469 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkgxg\" (UniqueName: \"kubernetes.io/projected/43f517de-033c-467c-9937-df5706ee1ca2-kube-api-access-jkgxg\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.880668 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.880719 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0bd6abac90ebac69ac03837941e4aa1820f14a49ea1b1fe31e1dd216b0487447/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.936870 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:15 crc kubenswrapper[4739]: I0218 14:23:15.062684 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:15 crc kubenswrapper[4739]: I0218 14:23:15.859135 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 14:23:16 crc kubenswrapper[4739]: I0218 14:23:16.395468 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43f517de-033c-467c-9937-df5706ee1ca2","Type":"ContainerStarted","Data":"d4df0f8d4267d4f85121acc8729cb17a2c6b7020a08109840e7ed6bb94cca088"}
Feb 18 14:23:16 crc kubenswrapper[4739]: I0218 14:23:16.916394 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:23:16 crc kubenswrapper[4739]: I0218 14:23:16.918516 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-log" containerID="cri-o://2ac1313ffdbad15c09d0bb7f2a4d1b596f72ac62a6780cb62e70fa5559b8c999" gracePeriod=30
Feb 18 14:23:16 crc kubenswrapper[4739]: I0218 14:23:16.918776 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-httpd" containerID="cri-o://c780b2636e91712d69d355da22c8be023ac8a48eb8e209ca36fa75cd60964d96" gracePeriod=30
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.414245 4739 generic.go:334] "Generic (PLEG): container finished" podID="043c7e92-488e-4581-b683-a50c6f3e4262" containerID="0bb35ababf8f49716c465fd1a071a3fc61371f1c41007f69d57d1ece07a81b5b" exitCode=0
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.414927 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerDied","Data":"0bb35ababf8f49716c465fd1a071a3fc61371f1c41007f69d57d1ece07a81b5b"}
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.415024 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerDied","Data":"363130708c866244a3e77f2ddb6aef5f2bdac939b2f9d1e05276302b2523678f"}
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.415083 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="363130708c866244a3e77f2ddb6aef5f2bdac939b2f9d1e05276302b2523678f"
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.419091 4739 generic.go:334] "Generic (PLEG): container finished" podID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerID="2ac1313ffdbad15c09d0bb7f2a4d1b596f72ac62a6780cb62e70fa5559b8c999" exitCode=143
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.419147 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerDied","Data":"2ac1313ffdbad15c09d0bb7f2a4d1b596f72ac62a6780cb62e70fa5559b8c999"}
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.456155 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.601816 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") "
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.601892 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcscx\" (UniqueName: \"kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") "
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.601929 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") "
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.601985 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") "
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.602188 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") "
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.602248 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") "
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.602294 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") "
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.602364 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.602714 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.603417 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.603450 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.610922 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx" (OuterVolumeSpecName: "kube-api-access-zcscx") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "kube-api-access-zcscx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.614970 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts" (OuterVolumeSpecName: "scripts") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.651621 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.705450 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcscx\" (UniqueName: \"kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.705491 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.705504 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.763433 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.808225 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.819966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data" (OuterVolumeSpecName: "config-data") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.911269 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.473566 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.473675 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43f517de-033c-467c-9937-df5706ee1ca2","Type":"ContainerStarted","Data":"0f0231204125bc2b20fc4b72eed91b2aaa5b163d221cc452723692c9b26fe987"}
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.531071 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.543497 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.572432 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:23:18 crc kubenswrapper[4739]: E0218 14:23:18.573078 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="proxy-httpd"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573093 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="proxy-httpd"
Feb 18 14:23:18 crc kubenswrapper[4739]: E0218 14:23:18.573144 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="sg-core"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573151 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="sg-core"
Feb 18 14:23:18 crc kubenswrapper[4739]: E0218 14:23:18.573161 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-notification-agent"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573168 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-notification-agent"
Feb 18 14:23:18 crc kubenswrapper[4739]: E0218 14:23:18.573184 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-central-agent"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573189 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-central-agent"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573479 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-notification-agent"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573513 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-central-agent"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573527 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="proxy-httpd"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573540 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="sg-core"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.576372 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.579260 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.579510 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.586692 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760330 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760433 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760509 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8hkp\" (UniqueName: \"kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760715 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760957 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.761063 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.862909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.862970 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.862994 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8hkp\" (UniqueName: \"kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.863028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.863083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.863109 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.863177 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.863432 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.864211 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.870622 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.871564 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.872839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.876213 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.887412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8hkp\" (UniqueName: \"kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0"
Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.921237 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:23:19 crc kubenswrapper[4739]: I0218 14:23:19.485569 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:23:19 crc kubenswrapper[4739]: I0218 14:23:19.491917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43f517de-033c-467c-9937-df5706ee1ca2","Type":"ContainerStarted","Data":"e308311384c510554abf8ba314d9b8cc54b782be943c166fcd6bb31d51a1056b"}
Feb 18 14:23:19 crc kubenswrapper[4739]: I0218 14:23:19.520221 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.520174906 podStartE2EDuration="5.520174906s" podCreationTimestamp="2026-02-18 14:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:19.515026497 +0000 UTC m=+1432.010747419" watchObservedRunningTime="2026-02-18 14:23:19.520174906 +0000 UTC m=+1432.015895838"
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.433729 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" path="/var/lib/kubelet/pods/043c7e92-488e-4581-b683-a50c6f3e4262/volumes"
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.510987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerStarted","Data":"15625072c38b1bf8ecb9484d34cda1baf8e1ed5006b99a1e19bebfe35acb6921"}
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.515186 4739 generic.go:334] "Generic (PLEG): container finished" podID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerID="c780b2636e91712d69d355da22c8be023ac8a48eb8e209ca36fa75cd60964d96" exitCode=0
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.515617 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerDied","Data":"c780b2636e91712d69d355da22c8be023ac8a48eb8e209ca36fa75cd60964d96"}
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.844335 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.934968 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") "
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.935437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") "
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.935493 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") "
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.937692 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fd27\" (UniqueName: \"kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") "
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.937743 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") "
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.937856 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") "
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.938117 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.939254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs" (OuterVolumeSpecName: "logs") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.942869 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") "
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.943041 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") "
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.944116 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.944134 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.947197 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts" (OuterVolumeSpecName: "scripts") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.958677 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27" (OuterVolumeSpecName: "kube-api-access-6fd27") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "kube-api-access-6fd27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.971572 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b" (OuterVolumeSpecName: "glance") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.029743 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.047514 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.047581 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") on node \"crc\" "
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.047596 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fd27\" (UniqueName: \"kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.047607 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.063847 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data" (OuterVolumeSpecName: "config-data") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.097556 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.097730 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b") on node "crc"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.111570 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.149830 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.149884 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.149900 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.529304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerDied","Data":"a83503aad1227f8256e1acb3ea10be6b3f0c314a395eb1f234c642acb0b7ab14"}
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.529352 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.529912 4739 scope.go:117] "RemoveContainer" containerID="c780b2636e91712d69d355da22c8be023ac8a48eb8e209ca36fa75cd60964d96"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.531970 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerStarted","Data":"7cb84333b58be15a2210f89adee22417614eb80e8146f3f7e40e5b59e3acec24"}
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.578057 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.591033 4739 scope.go:117] "RemoveContainer" containerID="2ac1313ffdbad15c09d0bb7f2a4d1b596f72ac62a6780cb62e70fa5559b8c999"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.604114 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.627965 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:23:21 crc kubenswrapper[4739]: E0218 14:23:21.628627 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-httpd"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.628643 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-httpd"
Feb 18 14:23:21 crc kubenswrapper[4739]: E0218 14:23:21.628683 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-log"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.628692 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-log"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.628983 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-log"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.629001 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-httpd"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.630566 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.644244 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.650005 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.650792 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.780434 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.780533 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmckw\" (UniqueName: \"kubernetes.io/projected/ac763f9f-5faa-4559-8d07-960b3d30566b-kube-api-access-wmckw\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.780722 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.780779 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.780946 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.781003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.781235 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.781357 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-logs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883215 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883276 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmckw\" (UniqueName: \"kubernetes.io/projected/ac763f9f-5faa-4559-8d07-960b3d30566b-kube-api-access-wmckw\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883324 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883343 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883559 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-logs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.884212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-logs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.890097 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.891554 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.903202 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.903811 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.904369 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.904436 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f742b1b3d6273dd3375e0e5a76a4c01f047ef0c4f7f8765a09ef674c2c3b6349/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.906832 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.983401 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmckw\" (UniqueName: \"kubernetes.io/projected/ac763f9f-5faa-4559-8d07-960b3d30566b-kube-api-access-wmckw\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:22 crc kubenswrapper[4739]: I0218 14:23:22.008852 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:23:22 crc kubenswrapper[4739]: I0218 14:23:22.263119 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:22 crc kubenswrapper[4739]: I0218 14:23:22.430990 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" path="/var/lib/kubelet/pods/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad/volumes"
Feb 18 14:23:22 crc kubenswrapper[4739]: I0218 14:23:22.547334 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerStarted","Data":"da0d2a4461fd0e93fa0d2f0206e6d723fcdf2469cbd26e7227c5dafa1b1a7b91"}
Feb 18 14:23:23 crc kubenswrapper[4739]: I0218 14:23:23.282988 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:23:23 crc kubenswrapper[4739]: I0218 14:23:23.578385 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerStarted","Data":"e831818a0e3deb50ef385bca26013a078b300adeb8cd0fcfdd387866f339b245"}
Feb 18 14:23:23 crc kubenswrapper[4739]: I0218 14:23:23.585911 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac763f9f-5faa-4559-8d07-960b3d30566b","Type":"ContainerStarted","Data":"c74a46caf197dedd9c2d715d0c17a4b9a9979871dc62b59f2ca4c7377645f255"}
Feb 18 14:23:24 crc kubenswrapper[4739]: I0218 14:23:24.600052 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac763f9f-5faa-4559-8d07-960b3d30566b","Type":"ContainerStarted","Data":"dc445577715fa089d238a4ffcca852ad90b61274bed53ae3ff7724fb515dd60a"}
Feb 18 14:23:24 crc kubenswrapper[4739]: I0218 14:23:24.600107 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac763f9f-5faa-4559-8d07-960b3d30566b","Type":"ContainerStarted","Data":"68090b2b8e5037afa342f3250c6bbea61b080100f9dc5b387dc68386a59fc3a1"}
Feb 18 14:23:24 crc kubenswrapper[4739]: I0218 14:23:24.642770 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.642744086 podStartE2EDuration="3.642744086s" podCreationTimestamp="2026-02-18 14:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:24.637176717 +0000 UTC m=+1437.132897649" watchObservedRunningTime="2026-02-18 14:23:24.642744086 +0000 UTC m=+1437.138465018"
Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.063719 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.063782 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.102907 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.115285 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.621904 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.627779 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:26 crc kubenswrapper[4739]: I0218 14:23:26.288486 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:23:26 crc kubenswrapper[4739]: I0218 14:23:26.635990 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerStarted","Data":"89235a3b1e9de2c433f31e281b9be507904e71d0aa11e8538f1814a923368ab2"}
Feb 18 14:23:26 crc kubenswrapper[4739]: I0218 14:23:26.636285 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 14:23:26 crc kubenswrapper[4739]: I0218 14:23:26.672265 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.836954701 podStartE2EDuration="8.672248838s" podCreationTimestamp="2026-02-18 14:23:18 +0000 UTC" firstStartedPulling="2026-02-18 14:23:19.48289257 +0000 UTC m=+1431.978613502" lastFinishedPulling="2026-02-18 14:23:25.318186717 +0000 UTC m=+1437.813907639" observedRunningTime="2026-02-18 14:23:26.667759876 +0000 UTC m=+1439.163480808" watchObservedRunningTime="2026-02-18 14:23:26.672248838 +0000 UTC m=+1439.167969760"
Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647189 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647231 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647345 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-central-agent" containerID="cri-o://7cb84333b58be15a2210f89adee22417614eb80e8146f3f7e40e5b59e3acec24" gracePeriod=30
Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647420 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="proxy-httpd" containerID="cri-o://89235a3b1e9de2c433f31e281b9be507904e71d0aa11e8538f1814a923368ab2" gracePeriod=30
Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647503 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="sg-core" containerID="cri-o://e831818a0e3deb50ef385bca26013a078b300adeb8cd0fcfdd387866f339b245" gracePeriod=30
Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647551 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-notification-agent" containerID="cri-o://da0d2a4461fd0e93fa0d2f0206e6d723fcdf2469cbd26e7227c5dafa1b1a7b91" gracePeriod=30
Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.070027 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.082726 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.660212 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerID="89235a3b1e9de2c433f31e281b9be507904e71d0aa11e8538f1814a923368ab2" exitCode=0
Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.660510 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerID="e831818a0e3deb50ef385bca26013a078b300adeb8cd0fcfdd387866f339b245" exitCode=2
Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.660521 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerID="da0d2a4461fd0e93fa0d2f0206e6d723fcdf2469cbd26e7227c5dafa1b1a7b91" exitCode=0
Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.661550 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerDied","Data":"89235a3b1e9de2c433f31e281b9be507904e71d0aa11e8538f1814a923368ab2"}
Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.661581 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerDied","Data":"e831818a0e3deb50ef385bca26013a078b300adeb8cd0fcfdd387866f339b245"}
Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.661593 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerDied","Data":"da0d2a4461fd0e93fa0d2f0206e6d723fcdf2469cbd26e7227c5dafa1b1a7b91"}
Feb 18 14:23:29 crc kubenswrapper[4739]: I0218 14:23:29.373583 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:23:29 crc kubenswrapper[4739]: I0218 14:23:29.373661 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:23:31 crc kubenswrapper[4739]: I0218 14:23:31.694280 4739 generic.go:334] "Generic (PLEG): container finished" podID="2ed7afcd-a9be-4c59-836d-355e4c502a01" containerID="7decdedc36c29035cbd6c5768e12052f73ae02bcfb7ff083bd55e7ded7c3ba91" exitCode=0
Feb 18 14:23:31 crc kubenswrapper[4739]: I0218 14:23:31.694667 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" event={"ID":"2ed7afcd-a9be-4c59-836d-355e4c502a01","Type":"ContainerDied","Data":"7decdedc36c29035cbd6c5768e12052f73ae02bcfb7ff083bd55e7ded7c3ba91"}
Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.264271 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.264336 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.304747 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.326578 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.708573 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.708823 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.168250 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.308228 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq668\" (UniqueName: \"kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668\") pod \"2ed7afcd-a9be-4c59-836d-355e4c502a01\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") "
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.308413 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data\") pod \"2ed7afcd-a9be-4c59-836d-355e4c502a01\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") "
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.308553 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts\") pod \"2ed7afcd-a9be-4c59-836d-355e4c502a01\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") "
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.308616 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle\") pod \"2ed7afcd-a9be-4c59-836d-355e4c502a01\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") "
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.318955 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668" (OuterVolumeSpecName: "kube-api-access-fq668") pod "2ed7afcd-a9be-4c59-836d-355e4c502a01" (UID: "2ed7afcd-a9be-4c59-836d-355e4c502a01"). InnerVolumeSpecName "kube-api-access-fq668". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.320560 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts" (OuterVolumeSpecName: "scripts") pod "2ed7afcd-a9be-4c59-836d-355e4c502a01" (UID: "2ed7afcd-a9be-4c59-836d-355e4c502a01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.342695 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data" (OuterVolumeSpecName: "config-data") pod "2ed7afcd-a9be-4c59-836d-355e4c502a01" (UID: "2ed7afcd-a9be-4c59-836d-355e4c502a01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.343951 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ed7afcd-a9be-4c59-836d-355e4c502a01" (UID: "2ed7afcd-a9be-4c59-836d-355e4c502a01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.411959 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq668\" (UniqueName: \"kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.412165 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.412175 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.412185 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.720919 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfg9d"
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.720910 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" event={"ID":"2ed7afcd-a9be-4c59-836d-355e4c502a01","Type":"ContainerDied","Data":"40ea49f88d331b4c7e345388fbc286ebb3f9c3af1caee046df2e917b02eb12a7"}
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.721658 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ea49f88d331b4c7e345388fbc286ebb3f9c3af1caee046df2e917b02eb12a7"
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.734799 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerID="7cb84333b58be15a2210f89adee22417614eb80e8146f3f7e40e5b59e3acec24" exitCode=0
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.736495 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerDied","Data":"7cb84333b58be15a2210f89adee22417614eb80e8146f3f7e40e5b59e3acec24"}
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.854365 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 18 14:23:33 crc kubenswrapper[4739]: E0218 14:23:33.855380 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed7afcd-a9be-4c59-836d-355e4c502a01" containerName="nova-cell0-conductor-db-sync"
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.855573 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed7afcd-a9be-4c59-836d-355e4c502a01" containerName="nova-cell0-conductor-db-sync"
Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.855998 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed7afcd-a9be-4c59-836d-355e4c502a01" containerName="nova-cell0-conductor-db-sync"
Feb 18 14:23:33 crc kubenswrapper[4739]:
I0218 14:23:33.856939 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.861167 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.861191 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r74ht" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.870190 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.977221 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.033850 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.034166 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9q9\" (UniqueName: \"kubernetes.io/projected/c35bd35d-d228-4223-a207-ea164d0c6b23-kube-api-access-4k9q9\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.034227 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " 
pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136288 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136353 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136432 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8hkp\" (UniqueName: \"kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136529 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136696 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136777 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136918 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.137265 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.137273 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k9q9\" (UniqueName: \"kubernetes.io/projected/c35bd35d-d228-4223-a207-ea164d0c6b23-kube-api-access-4k9q9\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.137428 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.137754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.138073 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.138094 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.142207 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-config-data\") pod \"nova-cell0-conductor-0\" (UID: 
\"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.143137 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts" (OuterVolumeSpecName: "scripts") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.143190 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp" (OuterVolumeSpecName: "kube-api-access-s8hkp") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "kube-api-access-s8hkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.156315 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k9q9\" (UniqueName: \"kubernetes.io/projected/c35bd35d-d228-4223-a207-ea164d0c6b23-kube-api-access-4k9q9\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.167099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.177698 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.199369 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.241276 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.241318 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.241334 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8hkp\" (UniqueName: \"kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.262530 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.306351 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data" (OuterVolumeSpecName: "config-data") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.356033 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.356070 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.684802 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 14:23:34 crc kubenswrapper[4739]: W0218 14:23:34.685749 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc35bd35d_d228_4223_a207_ea164d0c6b23.slice/crio-1849406a1f2c21c989cb98a31ddc847b8826ecc07a680db977c362877081e9d3 WatchSource:0}: Error finding container 1849406a1f2c21c989cb98a31ddc847b8826ecc07a680db977c362877081e9d3: Status 404 returned error can't find the container with id 1849406a1f2c21c989cb98a31ddc847b8826ecc07a680db977c362877081e9d3 Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.748978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerDied","Data":"15625072c38b1bf8ecb9484d34cda1baf8e1ed5006b99a1e19bebfe35acb6921"} Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.749294 4739 scope.go:117] "RemoveContainer" containerID="89235a3b1e9de2c433f31e281b9be507904e71d0aa11e8538f1814a923368ab2" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.749002 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.752635 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c35bd35d-d228-4223-a207-ea164d0c6b23","Type":"ContainerStarted","Data":"1849406a1f2c21c989cb98a31ddc847b8826ecc07a680db977c362877081e9d3"} Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.785539 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.789854 4739 scope.go:117] "RemoveContainer" containerID="e831818a0e3deb50ef385bca26013a078b300adeb8cd0fcfdd387866f339b245" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.806925 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.820801 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:34 crc kubenswrapper[4739]: E0218 14:23:34.821422 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="proxy-httpd" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821462 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="proxy-httpd" Feb 18 14:23:34 crc kubenswrapper[4739]: E0218 14:23:34.821474 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="sg-core" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821482 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="sg-core" Feb 18 14:23:34 crc kubenswrapper[4739]: E0218 14:23:34.821515 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-central-agent" Feb 18 14:23:34 crc 
kubenswrapper[4739]: I0218 14:23:34.821523 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-central-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: E0218 14:23:34.821551 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-notification-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821559 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-notification-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821803 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="sg-core" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821838 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-central-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821858 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-notification-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821885 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="proxy-httpd" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.824429 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.829784 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.830074 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.830282 4739 scope.go:117] "RemoveContainer" containerID="da0d2a4461fd0e93fa0d2f0206e6d723fcdf2469cbd26e7227c5dafa1b1a7b91" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.833998 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.886709 4739 scope.go:117] "RemoveContainer" containerID="7cb84333b58be15a2210f89adee22417614eb80e8146f3f7e40e5b59e3acec24" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.971334 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.971401 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.971645 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " 
pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.971815 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.972137 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.972257 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bz55\" (UniqueName: \"kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.972313 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.068269 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.068389 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.074895 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7bz55\" (UniqueName: \"kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.074951 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.075021 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.075037 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.075101 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.075168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 
14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.075255 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.076021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.076321 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.080119 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.080615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.080911 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.083796 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.098043 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bz55\" (UniqueName: \"kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.133079 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.159134 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.719145 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.767824 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c35bd35d-d228-4223-a207-ea164d0c6b23","Type":"ContainerStarted","Data":"9f60772161d540c8a5dbc2f3da2dba7aa06904c14b59606d6384b3d8ee20a2c1"} Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.769604 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.771027 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerStarted","Data":"633d577ca0d7c26b5d575a55a4d77d6216b341dedf226f7656b21d39f19c64e4"} Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.803342 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.803315046 podStartE2EDuration="2.803315046s" podCreationTimestamp="2026-02-18 14:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:35.788756721 +0000 UTC m=+1448.284477663" watchObservedRunningTime="2026-02-18 14:23:35.803315046 +0000 UTC m=+1448.299035968" Feb 18 14:23:36 crc kubenswrapper[4739]: I0218 14:23:36.429302 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" path="/var/lib/kubelet/pods/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40/volumes" Feb 18 14:23:36 crc kubenswrapper[4739]: I0218 14:23:36.782241 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerStarted","Data":"62637c0c6e3d9aa6dd9a357d05be808f306c43132357509831c6c4276f035294"} Feb 18 14:23:37 crc kubenswrapper[4739]: I0218 14:23:37.796544 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerStarted","Data":"3b915056344632cea227fb084003510db6f28165dd95f87eeb8a41b39c07b956"} Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.682720 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-zmb2f"] Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.684972 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.699176 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-55b1-account-create-update-rl2bd"] Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.724481 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.730016 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.757620 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-55b1-account-create-update-rl2bd"] Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.766178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrd2m\" (UniqueName: \"kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m\") pod \"aodh-db-create-zmb2f\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.766286 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts\") pod \"aodh-db-create-zmb2f\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.781216 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-zmb2f"] Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.814821 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerStarted","Data":"207b5c8f173777a219abe5fab0d30f956acecb4b1b39cab55be3107b97540271"} Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.868824 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts\") pod \"aodh-db-create-zmb2f\" (UID: 
\"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.868989 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmth8\" (UniqueName: \"kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.869132 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.869208 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrd2m\" (UniqueName: \"kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m\") pod \"aodh-db-create-zmb2f\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.869583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts\") pod \"aodh-db-create-zmb2f\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.888826 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrd2m\" (UniqueName: \"kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m\") pod \"aodh-db-create-zmb2f\" 
(UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.971763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmth8\" (UniqueName: \"kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.973019 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.973892 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.990095 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmth8\" (UniqueName: \"kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.035600 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.046773 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.294580 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.709805 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-55b1-account-create-update-rl2bd"] Feb 18 14:23:39 crc kubenswrapper[4739]: W0218 14:23:39.716902 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7351c0c9_c9c1_474c_a9cc_cde24bd45dfa.slice/crio-37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80 WatchSource:0}: Error finding container 37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80: Status 404 returned error can't find the container with id 37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80 Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.826501 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-55b1-account-create-update-rl2bd" event={"ID":"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa","Type":"ContainerStarted","Data":"37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80"} Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.889576 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-zmb2f"] Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.929719 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ldxnr"] Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.932266 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.935913 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.936163 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.945736 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ldxnr"] Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.120495 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.120998 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwvtx\" (UniqueName: \"kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.121136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.121524 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.223914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.224043 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.224075 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwvtx\" (UniqueName: \"kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.224109 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.253156 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.260251 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.264218 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.376780 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwvtx\" (UniqueName: \"kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.407901 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.409704 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.486607 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.525524 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.541822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.541898 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnmrw\" (UniqueName: \"kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.541987 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.542343 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.635745 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.638551 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.645179 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.645613 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.645781 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnmrw\" (UniqueName: \"kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.666152 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.666702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.681685 4739 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.733996 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.752897 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.752994 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.753095 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjqbp\" (UniqueName: \"kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.818864 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnmrw\" (UniqueName: \"kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.836096 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.920119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.920364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.928231 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjqbp\" (UniqueName: \"kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.937355 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zmb2f" event={"ID":"4445c84e-2108-44e0-a46e-673fe0858df3","Type":"ContainerStarted","Data":"3ed2d01779e3f9f2f1a7f3657c8ea7e0c04a12e2297ea7cab5002b17b30a7120"} Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.944635 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.950153 4739 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.971140 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjqbp\" (UniqueName: \"kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.973545 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:40.989849 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:40.995603 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:40.995645 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerStarted","Data":"c29f84cb2f10dd5869ffc87617c8a9e99b5f1b7ab01e8f8f6bf9c1b7fd53866f"} Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:40.995759 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.022016 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.093458 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.133171 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.133553 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.133675 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.133730 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg8gg\" (UniqueName: \"kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.157558 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 
14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.160888 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.166309 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.215507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.238208 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.238299 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.238350 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg8gg\" (UniqueName: \"kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.238400 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.240161 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.242892 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.242941 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tztg2\" (UniqueName: \"kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.243149 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.243186 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.258593 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data\") pod 
\"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.261964 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.291245408 podStartE2EDuration="7.261939375s" podCreationTimestamp="2026-02-18 14:23:34 +0000 UTC" firstStartedPulling="2026-02-18 14:23:35.724671321 +0000 UTC m=+1448.220392243" lastFinishedPulling="2026-02-18 14:23:39.695365288 +0000 UTC m=+1452.191086210" observedRunningTime="2026-02-18 14:23:41.049993363 +0000 UTC m=+1453.545714295" watchObservedRunningTime="2026-02-18 14:23:41.261939375 +0000 UTC m=+1453.757660297" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.284698 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.318143 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg8gg\" (UniqueName: \"kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.351615 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.364305 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.364767 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.364790 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tztg2\" (UniqueName: \"kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.364925 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.365979 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.366607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.373262 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.380836 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.381947 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.394397 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tztg2\" (UniqueName: \"kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474118 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7bkd\" (UniqueName: \"kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474354 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config\") pod 
\"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474394 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474516 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.559073 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.571455 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.579579 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.579875 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.580076 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.580206 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.581522 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 
crc kubenswrapper[4739]: I0218 14:23:41.581254 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.581255 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.581127 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.582137 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7bkd\" (UniqueName: \"kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.582424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.584151 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.608395 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7bkd\" (UniqueName: \"kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.745870 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.834512 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.858871 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ldxnr"] Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.060611 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"60a63b94-9b6f-4117-bd43-e7c7986f3824","Type":"ContainerStarted","Data":"1e4203ffbb72f10f3e23eeb7b58aca4644efc86b96e25b7947b3e87de9a09564"} Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.069827 4739 generic.go:334] "Generic (PLEG): container finished" podID="7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" containerID="633345116a43d3ca8fa44023cd81269b98b8fe89948eab70d0c8a2b4002309e9" exitCode=0 Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.069929 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-55b1-account-create-update-rl2bd" 
event={"ID":"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa","Type":"ContainerDied","Data":"633345116a43d3ca8fa44023cd81269b98b8fe89948eab70d0c8a2b4002309e9"} Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.078805 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ldxnr" event={"ID":"5f44227f-28d1-4aaf-9133-c4560b893022","Type":"ContainerStarted","Data":"a38d14b23155387f49e9e35f9e4c0f5e1fafb41bc41b4bda60fd5f970734778d"} Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.091925 4739 generic.go:334] "Generic (PLEG): container finished" podID="4445c84e-2108-44e0-a46e-673fe0858df3" containerID="67951a3352fb939ea45b17ca75ec53a682c20dd4d63961be0be0da15f32b4807" exitCode=0 Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.092260 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zmb2f" event={"ID":"4445c84e-2108-44e0-a46e-673fe0858df3","Type":"ContainerDied","Data":"67951a3352fb939ea45b17ca75ec53a682c20dd4d63961be0be0da15f32b4807"} Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.223178 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.631595 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.697813 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.011342 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7d9ft"] Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.027509 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.031820 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.041466 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.080206 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7d9ft"] Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.123162 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:23:43 crc kubenswrapper[4739]: W0218 14:23:43.142676 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb3e9cc3_348e_4556_89a2_ea261dd47147.slice/crio-3735cb006b027d9cddfe7de2fdfabfbd28a60f1cc6094e080c7661fe3bdd11bf WatchSource:0}: Error finding container 3735cb006b027d9cddfe7de2fdfabfbd28a60f1cc6094e080c7661fe3bdd11bf: Status 404 returned error can't find the container with id 3735cb006b027d9cddfe7de2fdfabfbd28a60f1cc6094e080c7661fe3bdd11bf Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.143011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60d51f11-fba7-4368-9665-198dca1f9adc","Type":"ContainerStarted","Data":"5e425dc81372bc58ea5a732a114720e008b75f79e58f406fbae181589aeba1b6"} Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.157704 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ldxnr" event={"ID":"5f44227f-28d1-4aaf-9133-c4560b893022","Type":"ContainerStarted","Data":"c6cce8603450086875d16ae66c0fe0efdc54a90290fdaaf6cec216bd19489355"} Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.172621 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerStarted","Data":"64030588e2930d3d06f331c679500514142af233ae50cdca79cac3e5508cd8e1"} Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.179221 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.179321 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.179352 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7768\" (UniqueName: \"kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.179377 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.189377 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerStarted","Data":"6e2df7ee9b43e8c8d150f1e10f74fbb1b12aa869992bbc9978302cfef895fb90"} Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.189978 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ldxnr" podStartSLOduration=4.189959 podStartE2EDuration="4.189959s" podCreationTimestamp="2026-02-18 14:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:43.189550469 +0000 UTC m=+1455.685271401" watchObservedRunningTime="2026-02-18 14:23:43.189959 +0000 UTC m=+1455.685679922" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.282234 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.282337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.282368 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7768\" (UniqueName: \"kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc 
kubenswrapper[4739]: I0218 14:23:43.282389 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.291710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.295012 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.297915 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.327108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7768\" (UniqueName: \"kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 
14:23:43.422249 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.807118 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.870635 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:23:43 crc kubenswrapper[4739]: E0218 14:23:43.871606 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" containerName="mariadb-account-create-update" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.871696 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" containerName="mariadb-account-create-update" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.872072 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" containerName="mariadb-account-create-update" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.876286 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.922993 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts\") pod \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.923690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmth8\" (UniqueName: \"kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8\") pod \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.923845 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" (UID: "7351c0c9-c9c1-474c-a9cc-cde24bd45dfa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.924508 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.927865 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.931224 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8" (OuterVolumeSpecName: "kube-api-access-rmth8") pod "7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" (UID: "7351c0c9-c9c1-474c-a9cc-cde24bd45dfa"). InnerVolumeSpecName "kube-api-access-rmth8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.027279 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.027492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724gm\" (UniqueName: \"kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.027595 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.028038 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmth8\" (UniqueName: \"kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.130151 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.130350 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.130410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-724gm\" (UniqueName: \"kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.131088 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " 
pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.132856 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.159476 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.188207 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-724gm\" (UniqueName: \"kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.232821 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7d9ft"] Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.233237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts\") pod \"4445c84e-2108-44e0-a46e-673fe0858df3\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.233593 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrd2m\" (UniqueName: \"kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m\") pod \"4445c84e-2108-44e0-a46e-673fe0858df3\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.236248 4739 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4445c84e-2108-44e0-a46e-673fe0858df3" (UID: "4445c84e-2108-44e0-a46e-673fe0858df3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.237949 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.253033 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zmb2f" event={"ID":"4445c84e-2108-44e0-a46e-673fe0858df3","Type":"ContainerDied","Data":"3ed2d01779e3f9f2f1a7f3657c8ea7e0c04a12e2297ea7cab5002b17b30a7120"} Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.253120 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed2d01779e3f9f2f1a7f3657c8ea7e0c04a12e2297ea7cab5002b17b30a7120" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.253174 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.255170 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m" (OuterVolumeSpecName: "kube-api-access-lrd2m") pod "4445c84e-2108-44e0-a46e-673fe0858df3" (UID: "4445c84e-2108-44e0-a46e-673fe0858df3"). InnerVolumeSpecName "kube-api-access-lrd2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.269261 4739 generic.go:334] "Generic (PLEG): container finished" podID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerID="21d6c1252de616814b74822ec06612c09a85d4a3dc10b578fb97435ea22e69d8" exitCode=0 Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.269394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" event={"ID":"cb3e9cc3-348e-4556-89a2-ea261dd47147","Type":"ContainerDied","Data":"21d6c1252de616814b74822ec06612c09a85d4a3dc10b578fb97435ea22e69d8"} Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.269553 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" event={"ID":"cb3e9cc3-348e-4556-89a2-ea261dd47147","Type":"ContainerStarted","Data":"3735cb006b027d9cddfe7de2fdfabfbd28a60f1cc6094e080c7661fe3bdd11bf"} Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.307998 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.308304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-55b1-account-create-update-rl2bd" event={"ID":"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa","Type":"ContainerDied","Data":"37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80"} Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.308363 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.351504 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrd2m\" (UniqueName: \"kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.445544 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wg5zz"
Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.963824 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.012744 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.057260 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"]
Feb 18 14:23:45 crc kubenswrapper[4739]: W0218 14:23:45.082371 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bbaed51_382b_4b1b_8b3f_95521f415472.slice/crio-8246321a9a69ef9443f0eafe62f613f2bf2304eee3857bb71521e44ea71bf052 WatchSource:0}: Error finding container 8246321a9a69ef9443f0eafe62f613f2bf2304eee3857bb71521e44ea71bf052: Status 404 returned error can't find the container with id 8246321a9a69ef9443f0eafe62f613f2bf2304eee3857bb71521e44ea71bf052
Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.342782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" event={"ID":"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c","Type":"ContainerStarted","Data":"f654a93fc558fd96d5cdb40c4eb8145a76ceb6daf5c1d8dd83b579ef3e4f1ae6"}
Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.343050 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" event={"ID":"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c","Type":"ContainerStarted","Data":"a2ff715f6687dcb420415366f7ad28d9ba10898b955268123d1a60c93c36a991"}
Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.413059 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerStarted","Data":"8246321a9a69ef9443f0eafe62f613f2bf2304eee3857bb71521e44ea71bf052"}
Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.415184 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" podStartSLOduration=3.415165076 podStartE2EDuration="3.415165076s" podCreationTimestamp="2026-02-18 14:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:45.402160469 +0000 UTC m=+1457.897881391" watchObservedRunningTime="2026-02-18 14:23:45.415165076 +0000 UTC m=+1457.910885998"
Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.439808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" event={"ID":"cb3e9cc3-348e-4556-89a2-ea261dd47147","Type":"ContainerStarted","Data":"94476dfafd6d1d5f23f9e15354d4a5e30397b87f6bed37cf1f501afccf7bb2cc"}
Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.440771 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt"
Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.477674 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" podStartSLOduration=4.477647635 podStartE2EDuration="4.477647635s" podCreationTimestamp="2026-02-18 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:45.461820807 +0000 UTC m=+1457.957541729" watchObservedRunningTime="2026-02-18 14:23:45.477647635 +0000 UTC m=+1457.973368557"
Feb 18 14:23:46 crc kubenswrapper[4739]: I0218 14:23:46.465420 4739 generic.go:334] "Generic (PLEG): container finished" podID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerID="6869795123dd672f097b8cf90d0e5e277663d03ea727ac622ba0a62b525526df" exitCode=0
Feb 18 14:23:46 crc kubenswrapper[4739]: I0218 14:23:46.465545 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerDied","Data":"6869795123dd672f097b8cf90d0e5e277663d03ea727ac622ba0a62b525526df"}
Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.505150 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"60a63b94-9b6f-4117-bd43-e7c7986f3824","Type":"ContainerStarted","Data":"270c5492dac27b54d8ea38736f097fb276dfdd13b5159fe0a400f376b6d5be8f"}
Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.538738 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerStarted","Data":"c27e75478b8aac6ce642cba868d17695b7ae39c02c2bb372e6c68d1a092137a3"}
Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.547098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60d51f11-fba7-4368-9665-198dca1f9adc","Type":"ContainerStarted","Data":"4b60b38fea8ccc13c08f02fa56b81b4a343cc57d4a2683a068d2eaff684ca543"}
Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.547230 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="60d51f11-fba7-4368-9665-198dca1f9adc" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4b60b38fea8ccc13c08f02fa56b81b4a343cc57d4a2683a068d2eaff684ca543" gracePeriod=30
Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.551954 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.417516641 podStartE2EDuration="8.551934942s" podCreationTimestamp="2026-02-18 14:23:40 +0000 UTC" firstStartedPulling="2026-02-18 14:23:41.847703754 +0000 UTC m=+1454.343424676" lastFinishedPulling="2026-02-18 14:23:47.982122055 +0000 UTC m=+1460.477842977" observedRunningTime="2026-02-18 14:23:48.524938014 +0000 UTC m=+1461.020658936" watchObservedRunningTime="2026-02-18 14:23:48.551934942 +0000 UTC m=+1461.047655864"
Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.578162 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.766425673 podStartE2EDuration="8.57814112s" podCreationTimestamp="2026-02-18 14:23:40 +0000 UTC" firstStartedPulling="2026-02-18 14:23:42.231540713 +0000 UTC m=+1454.727261635" lastFinishedPulling="2026-02-18 14:23:48.04325616 +0000 UTC m=+1460.538977082" observedRunningTime="2026-02-18 14:23:48.567652087 +0000 UTC m=+1461.063373039" watchObservedRunningTime="2026-02-18 14:23:48.57814112 +0000 UTC m=+1461.073862042"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.348304 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-xg8g2"]
Feb 18 14:23:49 crc kubenswrapper[4739]: E0218 14:23:49.351055 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4445c84e-2108-44e0-a46e-673fe0858df3" containerName="mariadb-database-create"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.351084 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4445c84e-2108-44e0-a46e-673fe0858df3" containerName="mariadb-database-create"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.351409 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4445c84e-2108-44e0-a46e-673fe0858df3" containerName="mariadb-database-create"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.353335 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.356834 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-747v8"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.357029 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.357825 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.358126 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.361677 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-xg8g2"]
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.517808 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnk6l\" (UniqueName: \"kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.517963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.518111 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.518146 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.577182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerStarted","Data":"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944"}
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.577231 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerStarted","Data":"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459"}
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.579145 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerStarted","Data":"cd4f224eda5c86f0e3784e45d9715568dac8dfc7c31367362a6e2989121137c0"}
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.579218 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-log" containerID="cri-o://c27e75478b8aac6ce642cba868d17695b7ae39c02c2bb372e6c68d1a092137a3" gracePeriod=30
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.579523 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-metadata" containerID="cri-o://cd4f224eda5c86f0e3784e45d9715568dac8dfc7c31367362a6e2989121137c0" gracePeriod=30
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.583912 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerStarted","Data":"0ed9ea0acaa9a000246ad43383e3ff8712eb08ccc211dd774ede3a75ac80e158"}
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.614638 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.228886767 podStartE2EDuration="9.614619207s" podCreationTimestamp="2026-02-18 14:23:40 +0000 UTC" firstStartedPulling="2026-02-18 14:23:42.655091319 +0000 UTC m=+1455.150812241" lastFinishedPulling="2026-02-18 14:23:48.040823739 +0000 UTC m=+1460.536544681" observedRunningTime="2026-02-18 14:23:49.593998869 +0000 UTC m=+1462.089719811" watchObservedRunningTime="2026-02-18 14:23:49.614619207 +0000 UTC m=+1462.110340129"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.619674 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.619865 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.619962 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnk6l\" (UniqueName: \"kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.620252 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.623965 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.273812395 podStartE2EDuration="9.623943721s" podCreationTimestamp="2026-02-18 14:23:40 +0000 UTC" firstStartedPulling="2026-02-18 14:23:42.632486521 +0000 UTC m=+1455.128207443" lastFinishedPulling="2026-02-18 14:23:47.982617857 +0000 UTC m=+1460.478338769" observedRunningTime="2026-02-18 14:23:49.622275749 +0000 UTC m=+1462.117996671" watchObservedRunningTime="2026-02-18 14:23:49.623943721 +0000 UTC m=+1462.119664663"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.636297 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.638008 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.638354 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.645392 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnk6l\" (UniqueName: \"kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.674328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.436911 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-xg8g2"]
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.607271 4739 generic.go:334] "Generic (PLEG): container finished" podID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerID="cd4f224eda5c86f0e3784e45d9715568dac8dfc7c31367362a6e2989121137c0" exitCode=0
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.607530 4739 generic.go:334] "Generic (PLEG): container finished" podID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerID="c27e75478b8aac6ce642cba868d17695b7ae39c02c2bb372e6c68d1a092137a3" exitCode=143
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.607573 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerDied","Data":"cd4f224eda5c86f0e3784e45d9715568dac8dfc7c31367362a6e2989121137c0"}
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.607596 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerDied","Data":"c27e75478b8aac6ce642cba868d17695b7ae39c02c2bb372e6c68d1a092137a3"}
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.609940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xg8g2" event={"ID":"1543620e-d684-4634-ba89-662f02f2b0e4","Type":"ContainerStarted","Data":"36831a1e37f2b21d3c3aead0d2ccb7ab0dbd8dd55f9fcd39a7a0f41c0dec9ba6"}
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.837362 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.837547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.876354 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.930253 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.992652 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.060289 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle\") pod \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") "
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.060823 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs\") pod \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") "
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.060948 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data\") pod \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") "
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.061009 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tztg2\" (UniqueName: \"kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2\") pod \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") "
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.061225 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs" (OuterVolumeSpecName: "logs") pod "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" (UID: "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.063184 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.084375 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2" (OuterVolumeSpecName: "kube-api-access-tztg2") pod "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" (UID: "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1"). InnerVolumeSpecName "kube-api-access-tztg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.155941 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" (UID: "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.165175 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.165209 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tztg2\" (UniqueName: \"kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.169043 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data" (OuterVolumeSpecName: "config-data") pod "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" (UID: "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.272362 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.561813 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.561877 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.632835 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerDied","Data":"6e2df7ee9b43e8c8d150f1e10f74fbb1b12aa869992bbc9978302cfef895fb90"}
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.632900 4739 scope.go:117] "RemoveContainer" containerID="cd4f224eda5c86f0e3784e45d9715568dac8dfc7c31367362a6e2989121137c0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.633074 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.689483 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.703861 4739 scope.go:117] "RemoveContainer" containerID="c27e75478b8aac6ce642cba868d17695b7ae39c02c2bb372e6c68d1a092137a3"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.724062 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.747562 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.750251 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.753585 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:23:51 crc kubenswrapper[4739]: E0218 14:23:51.754234 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-log"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.754267 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-log"
Feb 18 14:23:51 crc kubenswrapper[4739]: E0218 14:23:51.754315 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-metadata"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.754324 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-metadata"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.754667 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-log"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.754690 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-metadata"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.756965 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.760115 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.760225 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.775377 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.866982 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"]
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.880347 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="dnsmasq-dns" containerID="cri-o://38483feafbc06f3f1617bba16dbce12f0da5c76ff8f6d9cf24f5ec57e0763180" gracePeriod=10
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.889256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.889470 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtvs\" (UniqueName: \"kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.889515 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.889537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.889604 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.991124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.991266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvtvs\" (UniqueName: \"kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.991306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.991326 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.991378 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.995991 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.996134 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.000365 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.014343 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.017976 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvtvs\" (UniqueName: \"kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0"
Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.087199 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.432135 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" path="/var/lib/kubelet/pods/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1/volumes"
Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.644733 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.242:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.645390 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.242:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.670554 4739 generic.go:334] "Generic (PLEG): container finished" podID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerID="38483feafbc06f3f1617bba16dbce12f0da5c76ff8f6d9cf24f5ec57e0763180" exitCode=0
Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.670619 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" event={"ID":"496019f4-ba1f-40a6-9cff-bf7bd8dfee51","Type":"ContainerDied","Data":"38483feafbc06f3f1617bba16dbce12f0da5c76ff8f6d9cf24f5ec57e0763180"}
Feb 18 14:23:52 crc kubenswrapper[4739]: W0218 14:23:52.778343 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba4c65b2_a3f9_446e_9807_bb2290d04b87.slice/crio-fcd65c20afbc350c9d61b1093485245bbb865573428868ea44d2b6e0456a72d7 WatchSource:0}: Error finding container fcd65c20afbc350c9d61b1093485245bbb865573428868ea44d2b6e0456a72d7: Status 404 returned error can't find the container with id fcd65c20afbc350c9d61b1093485245bbb865573428868ea44d2b6e0456a72d7
Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.780117 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:23:53 crc kubenswrapper[4739]: I0218 14:23:53.698394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerStarted","Data":"7acadbcf2178ed421b528315fa4ae13bf1f80d7851ac1bb187d53db89de360f1"}
Feb 18 14:23:53 crc kubenswrapper[4739]: I0218 14:23:53.698991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerStarted","Data":"fcd65c20afbc350c9d61b1093485245bbb865573428868ea44d2b6e0456a72d7"}
Feb 18 14:23:53 crc kubenswrapper[4739]: I0218 14:23:53.700738 4739 generic.go:334] "Generic (PLEG): container finished" podID="5f44227f-28d1-4aaf-9133-c4560b893022" containerID="c6cce8603450086875d16ae66c0fe0efdc54a90290fdaaf6cec216bd19489355" exitCode=0
Feb 18 14:23:53 crc kubenswrapper[4739]: I0218 14:23:53.700783 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ldxnr" event={"ID":"5f44227f-28d1-4aaf-9133-c4560b893022","Type":"ContainerDied","Data":"c6cce8603450086875d16ae66c0fe0efdc54a90290fdaaf6cec216bd19489355"}
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.450897 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b"
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.471147 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ldxnr"
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557016 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") "
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557071 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") "
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557173 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle\") pod \"5f44227f-28d1-4aaf-9133-c4560b893022\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") "
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557234 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts\") pod \"5f44227f-28d1-4aaf-9133-c4560b893022\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") "
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557365 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") "
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557418 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg4n5\" (UniqueName: \"kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") "
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557611 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") "
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557650 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") "
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557724 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data\") pod \"5f44227f-28d1-4aaf-9133-c4560b893022\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") "
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557848 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwvtx\" (UniqueName: \"kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx\") pod \"5f44227f-28d1-4aaf-9133-c4560b893022\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") "
Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.566105 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5" (OuterVolumeSpecName: "kube-api-access-zg4n5") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51").
InnerVolumeSpecName "kube-api-access-zg4n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.566810 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts" (OuterVolumeSpecName: "scripts") pod "5f44227f-28d1-4aaf-9133-c4560b893022" (UID: "5f44227f-28d1-4aaf-9133-c4560b893022"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.568671 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx" (OuterVolumeSpecName: "kube-api-access-mwvtx") pod "5f44227f-28d1-4aaf-9133-c4560b893022" (UID: "5f44227f-28d1-4aaf-9133-c4560b893022"). InnerVolumeSpecName "kube-api-access-mwvtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.619267 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f44227f-28d1-4aaf-9133-c4560b893022" (UID: "5f44227f-28d1-4aaf-9133-c4560b893022"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.635886 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data" (OuterVolumeSpecName: "config-data") pod "5f44227f-28d1-4aaf-9133-c4560b893022" (UID: "5f44227f-28d1-4aaf-9133-c4560b893022"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.654939 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.656248 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.659041 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config" (OuterVolumeSpecName: "config") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661385 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661416 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg4n5\" (UniqueName: \"kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661429 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661438 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661463 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661472 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwvtx\" (UniqueName: \"kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661480 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661488 4739 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.664764 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.682379 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.752558 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ldxnr" event={"ID":"5f44227f-28d1-4aaf-9133-c4560b893022","Type":"ContainerDied","Data":"a38d14b23155387f49e9e35f9e4c0f5e1fafb41bc41b4bda60fd5f970734778d"} Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.752589 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.752605 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a38d14b23155387f49e9e35f9e4c0f5e1fafb41bc41b4bda60fd5f970734778d" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.755989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" event={"ID":"496019f4-ba1f-40a6-9cff-bf7bd8dfee51","Type":"ContainerDied","Data":"6ad816951b3fbde1a7196efd13d5a85b80b684bb992e88915048b9d53fd1030f"} Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.756048 4739 scope.go:117] "RemoveContainer" containerID="38483feafbc06f3f1617bba16dbce12f0da5c76ff8f6d9cf24f5ec57e0763180" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.756085 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.764324 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.764580 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.798324 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"] Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.798798 4739 scope.go:117] "RemoveContainer" containerID="8b70db3067c947ac9fe93c9c738cc56e4ed6885f9ff81677596f72e6844d09b7" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.811118 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"] Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.428411 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" path="/var/lib/kubelet/pods/496019f4-ba1f-40a6-9cff-bf7bd8dfee51/volumes" Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.584807 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.585055 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="60a63b94-9b6f-4117-bd43-e7c7986f3824" containerName="nova-scheduler-scheduler" containerID="cri-o://270c5492dac27b54d8ea38736f097fb276dfdd13b5159fe0a400f376b6d5be8f" gracePeriod=30 Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.606198 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.606503 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-log" containerID="cri-o://149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944" gracePeriod=30 Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.606665 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-api" containerID="cri-o://7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459" gracePeriod=30 Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.642648 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.770326 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerStarted","Data":"82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce"} Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.773392 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xg8g2" event={"ID":"1543620e-d684-4634-ba89-662f02f2b0e4","Type":"ContainerStarted","Data":"52da9b09d947fe24144c6c47d6f9580445b80136111737b82302681aad3a5631"} Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.796352 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=7.796327445 podStartE2EDuration="7.796327445s" podCreationTimestamp="2026-02-18 14:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:58.78973199 +0000 UTC m=+1471.285452922" watchObservedRunningTime="2026-02-18 14:23:58.796327445 +0000 UTC m=+1471.292048367" Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.823308 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-xg8g2" podStartSLOduration=3.052190496 podStartE2EDuration="9.823292061s" podCreationTimestamp="2026-02-18 14:23:49 +0000 UTC" firstStartedPulling="2026-02-18 14:23:50.439662945 +0000 UTC m=+1462.935383867" lastFinishedPulling="2026-02-18 14:23:57.21076451 +0000 UTC m=+1469.706485432" observedRunningTime="2026-02-18 14:23:58.82087297 +0000 UTC m=+1471.316593892" watchObservedRunningTime="2026-02-18 14:23:58.823292061 +0000 UTC m=+1471.319012983" Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.373128 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:23:59 
crc kubenswrapper[4739]: I0218 14:23:59.373643 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.788380 4739 generic.go:334] "Generic (PLEG): container finished" podID="60a63b94-9b6f-4117-bd43-e7c7986f3824" containerID="270c5492dac27b54d8ea38736f097fb276dfdd13b5159fe0a400f376b6d5be8f" exitCode=0 Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.788765 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"60a63b94-9b6f-4117-bd43-e7c7986f3824","Type":"ContainerDied","Data":"270c5492dac27b54d8ea38736f097fb276dfdd13b5159fe0a400f376b6d5be8f"} Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.791328 4739 generic.go:334] "Generic (PLEG): container finished" podID="69e98338-825d-4f76-833c-2e1ea807d942" containerID="149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944" exitCode=143 Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.791391 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerDied","Data":"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944"} Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.791772 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-log" containerID="cri-o://7acadbcf2178ed421b528315fa4ae13bf1f80d7851ac1bb187d53db89de360f1" gracePeriod=30 Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.791837 4739 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-metadata" containerID="cri-o://82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce" gracePeriod=30 Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.000308 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.126237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data\") pod \"60a63b94-9b6f-4117-bd43-e7c7986f3824\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.126463 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnmrw\" (UniqueName: \"kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw\") pod \"60a63b94-9b6f-4117-bd43-e7c7986f3824\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.126586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle\") pod \"60a63b94-9b6f-4117-bd43-e7c7986f3824\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.131619 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw" (OuterVolumeSpecName: "kube-api-access-nnmrw") pod "60a63b94-9b6f-4117-bd43-e7c7986f3824" (UID: "60a63b94-9b6f-4117-bd43-e7c7986f3824"). InnerVolumeSpecName "kube-api-access-nnmrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.159559 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60a63b94-9b6f-4117-bd43-e7c7986f3824" (UID: "60a63b94-9b6f-4117-bd43-e7c7986f3824"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.160706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data" (OuterVolumeSpecName: "config-data") pod "60a63b94-9b6f-4117-bd43-e7c7986f3824" (UID: "60a63b94-9b6f-4117-bd43-e7c7986f3824"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.229539 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.229785 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnmrw\" (UniqueName: \"kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.229795 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:00 crc kubenswrapper[4739]: E0218 14:24:00.369841 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba4c65b2_a3f9_446e_9807_bb2290d04b87.slice/crio-conmon-82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.806026 4739 generic.go:334] "Generic (PLEG): container finished" podID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerID="82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce" exitCode=0 Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.806335 4739 generic.go:334] "Generic (PLEG): container finished" podID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerID="7acadbcf2178ed421b528315fa4ae13bf1f80d7851ac1bb187d53db89de360f1" exitCode=143 Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.806079 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerDied","Data":"82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce"} Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.806426 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerDied","Data":"7acadbcf2178ed421b528315fa4ae13bf1f80d7851ac1bb187d53db89de360f1"} Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.808133 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"60a63b94-9b6f-4117-bd43-e7c7986f3824","Type":"ContainerDied","Data":"1e4203ffbb72f10f3e23eeb7b58aca4644efc86b96e25b7947b3e87de9a09564"} Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.808181 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.808236 4739 scope.go:117] "RemoveContainer" containerID="270c5492dac27b54d8ea38736f097fb276dfdd13b5159fe0a400f376b6d5be8f" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.839521 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.875501 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.896850 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:00 crc kubenswrapper[4739]: E0218 14:24:00.897583 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="dnsmasq-dns" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.897613 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="dnsmasq-dns" Feb 18 14:24:00 crc kubenswrapper[4739]: E0218 14:24:00.897639 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="init" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.897649 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="init" Feb 18 14:24:00 crc kubenswrapper[4739]: E0218 14:24:00.897694 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f44227f-28d1-4aaf-9133-c4560b893022" containerName="nova-manage" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.897704 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f44227f-28d1-4aaf-9133-c4560b893022" containerName="nova-manage" Feb 18 14:24:00 crc kubenswrapper[4739]: E0218 14:24:00.897719 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="60a63b94-9b6f-4117-bd43-e7c7986f3824" containerName="nova-scheduler-scheduler" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.897727 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a63b94-9b6f-4117-bd43-e7c7986f3824" containerName="nova-scheduler-scheduler" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.898077 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f44227f-28d1-4aaf-9133-c4560b893022" containerName="nova-manage" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.898108 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a63b94-9b6f-4117-bd43-e7c7986f3824" containerName="nova-scheduler-scheduler" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.898136 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="dnsmasq-dns" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.899326 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.907756 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.915882 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.002207 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.055017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.055109 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdw6\" (UniqueName: \"kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.055214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.157437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle\") pod \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.157576 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs\") pod \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 
14:24:01.157610 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvtvs\" (UniqueName: \"kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs\") pod \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") "
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.157682 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs\") pod \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") "
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.157879 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data\") pod \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") "
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.158407 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.158429 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs" (OuterVolumeSpecName: "logs") pod "ba4c65b2-a3f9-446e-9807-bb2290d04b87" (UID: "ba4c65b2-a3f9-446e-9807-bb2290d04b87"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.158514 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhdw6\" (UniqueName: \"kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.158614 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.158827 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.168861 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.169140 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs" (OuterVolumeSpecName: "kube-api-access-xvtvs") pod "ba4c65b2-a3f9-446e-9807-bb2290d04b87" (UID: "ba4c65b2-a3f9-446e-9807-bb2290d04b87"). InnerVolumeSpecName "kube-api-access-xvtvs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.182142 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.202691 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhdw6\" (UniqueName: \"kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.264722 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvtvs\" (UniqueName: \"kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.266609 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data" (OuterVolumeSpecName: "config-data") pod "ba4c65b2-a3f9-446e-9807-bb2290d04b87" (UID: "ba4c65b2-a3f9-446e-9807-bb2290d04b87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.318240 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.342249 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.214:5353: i/o timeout"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.360605 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ba4c65b2-a3f9-446e-9807-bb2290d04b87" (UID: "ba4c65b2-a3f9-446e-9807-bb2290d04b87"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.375734 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.375765 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.381846 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba4c65b2-a3f9-446e-9807-bb2290d04b87" (UID: "ba4c65b2-a3f9-446e-9807-bb2290d04b87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.478802 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.832570 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerDied","Data":"fcd65c20afbc350c9d61b1093485245bbb865573428868ea44d2b6e0456a72d7"}
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.832638 4739 scope.go:117] "RemoveContainer" containerID="82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.832634 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.842908 4739 generic.go:334] "Generic (PLEG): container finished" podID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerID="0ed9ea0acaa9a000246ad43383e3ff8712eb08ccc211dd774ede3a75ac80e158" exitCode=0
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.842945 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerDied","Data":"0ed9ea0acaa9a000246ad43383e3ff8712eb08ccc211dd774ede3a75ac80e158"}
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.907715 4739 scope.go:117] "RemoveContainer" containerID="7acadbcf2178ed421b528315fa4ae13bf1f80d7851ac1bb187d53db89de360f1"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.942880 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.964538 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.978509 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.991119 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:24:01 crc kubenswrapper[4739]: E0218 14:24:01.991800 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-metadata"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.991826 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-metadata"
Feb 18 14:24:01 crc kubenswrapper[4739]: E0218 14:24:01.991889 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-log"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.991898 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-log"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.992234 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-metadata"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.992265 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-log"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.994277 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.996272 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.996271 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.017896 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.095956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.096067 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxkct\" (UniqueName: \"kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.096089 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.096128 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.096432 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.198675 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.198846 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.198950 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxkct\" (UniqueName: \"kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.198984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.199037 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.199640 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.202896 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.203532 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.203547 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.221745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxkct\" (UniqueName: \"kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.428687 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a63b94-9b6f-4117-bd43-e7c7986f3824" path="/var/lib/kubelet/pods/60a63b94-9b6f-4117-bd43-e7c7986f3824/volumes"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.429755 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" path="/var/lib/kubelet/pods/ba4c65b2-a3f9-446e-9807-bb2290d04b87/volumes"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.515864 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.818246 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.873493 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2c9cba7f-9b49-4413-a546-9ecf1950d543","Type":"ContainerStarted","Data":"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7"}
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.873537 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2c9cba7f-9b49-4413-a546-9ecf1950d543","Type":"ContainerStarted","Data":"55bf56fc29bc6c5c7c73f1b370236bcbca1545fe9a2d06fed65e1f34bd49bd9b"}
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.875602 4739 generic.go:334] "Generic (PLEG): container finished" podID="1543620e-d684-4634-ba89-662f02f2b0e4" containerID="52da9b09d947fe24144c6c47d6f9580445b80136111737b82302681aad3a5631" exitCode=0
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.875675 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xg8g2" event={"ID":"1543620e-d684-4634-ba89-662f02f2b0e4","Type":"ContainerDied","Data":"52da9b09d947fe24144c6c47d6f9580445b80136111737b82302681aad3a5631"}
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.878276 4739 generic.go:334] "Generic (PLEG): container finished" podID="69e98338-825d-4f76-833c-2e1ea807d942" containerID="7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459" exitCode=0
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.878379 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.878367 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerDied","Data":"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459"}
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.878476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerDied","Data":"64030588e2930d3d06f331c679500514142af233ae50cdca79cac3e5508cd8e1"}
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.878493 4739 scope.go:117] "RemoveContainer" containerID="7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.888305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerStarted","Data":"efd61b74e3eaf8a43ba51f508d08a1af562b43d4efba62cb59c8fb5bbe916eec"}
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.891872 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8918542350000003 podStartE2EDuration="2.891854235s" podCreationTimestamp="2026-02-18 14:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:02.886994803 +0000 UTC m=+1475.382715725" watchObservedRunningTime="2026-02-18 14:24:02.891854235 +0000 UTC m=+1475.387575157"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.920419 4739 scope.go:117] "RemoveContainer" containerID="149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.921242 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data\") pod \"69e98338-825d-4f76-833c-2e1ea807d942\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") "
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.921322 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs\") pod \"69e98338-825d-4f76-833c-2e1ea807d942\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") "
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.921369 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle\") pod \"69e98338-825d-4f76-833c-2e1ea807d942\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") "
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.921575 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg8gg\" (UniqueName: \"kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg\") pod \"69e98338-825d-4f76-833c-2e1ea807d942\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") "
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.922753 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs" (OuterVolumeSpecName: "logs") pod "69e98338-825d-4f76-833c-2e1ea807d942" (UID: "69e98338-825d-4f76-833c-2e1ea807d942"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.922911 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.926955 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg" (OuterVolumeSpecName: "kube-api-access-rg8gg") pod "69e98338-825d-4f76-833c-2e1ea807d942" (UID: "69e98338-825d-4f76-833c-2e1ea807d942"). InnerVolumeSpecName "kube-api-access-rg8gg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.949667 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wg5zz" podStartSLOduration=4.420407398 podStartE2EDuration="19.949645416s" podCreationTimestamp="2026-02-18 14:23:43 +0000 UTC" firstStartedPulling="2026-02-18 14:23:46.828064035 +0000 UTC m=+1459.323784947" lastFinishedPulling="2026-02-18 14:24:02.357302043 +0000 UTC m=+1474.853022965" observedRunningTime="2026-02-18 14:24:02.934778863 +0000 UTC m=+1475.430499795" watchObservedRunningTime="2026-02-18 14:24:02.949645416 +0000 UTC m=+1475.445366338"
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.977645 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69e98338-825d-4f76-833c-2e1ea807d942" (UID: "69e98338-825d-4f76-833c-2e1ea807d942"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.991982 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data" (OuterVolumeSpecName: "config-data") pod "69e98338-825d-4f76-833c-2e1ea807d942" (UID: "69e98338-825d-4f76-833c-2e1ea807d942"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.001305 4739 scope.go:117] "RemoveContainer" containerID="7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459"
Feb 18 14:24:03 crc kubenswrapper[4739]: E0218 14:24:03.001918 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459\": container with ID starting with 7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459 not found: ID does not exist" containerID="7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.001975 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459"} err="failed to get container status \"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459\": rpc error: code = NotFound desc = could not find container \"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459\": container with ID starting with 7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459 not found: ID does not exist"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.001997 4739 scope.go:117] "RemoveContainer" containerID="149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944"
Feb 18 14:24:03 crc kubenswrapper[4739]: E0218 14:24:03.002232 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944\": container with ID starting with 149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944 not found: ID does not exist" containerID="149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.002254 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944"} err="failed to get container status \"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944\": rpc error: code = NotFound desc = could not find container \"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944\": container with ID starting with 149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944 not found: ID does not exist"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.025487 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.025517 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.025529 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg8gg\" (UniqueName: \"kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:03 crc kubenswrapper[4739]: W0218 14:24:03.039772 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb3f59c_d6e1_4eb7_ad1d_75644646a2f9.slice/crio-bfd6dae4fb10d51320c5b40851cb77928f9eb337a4774f99be8d60a2033f0bdc WatchSource:0}: Error finding container bfd6dae4fb10d51320c5b40851cb77928f9eb337a4774f99be8d60a2033f0bdc: Status 404 returned error can't find the container with id bfd6dae4fb10d51320c5b40851cb77928f9eb337a4774f99be8d60a2033f0bdc
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.044299 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.241599 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.261920 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.293826 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:24:03 crc kubenswrapper[4739]: E0218 14:24:03.294543 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-log"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.294566 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-log"
Feb 18 14:24:03 crc kubenswrapper[4739]: E0218 14:24:03.294616 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-api"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.294627 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-api"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.294991 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-log"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.295016 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-api"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.297318 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.302842 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.309411 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.363461 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.363623 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.363674 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.363689 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.465308 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.465484 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.465538 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.465557 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.465835 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.469243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.470661 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.483125 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.626774 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.919166 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerStarted","Data":"82597e5883ccf1e7783fac27d49ed242689bb7c4947b55ae4f7dbaeea0b394fe"}
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.919543 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerStarted","Data":"9b767ad311330c4e783eb9ba94b73f05cfa35a7e1442008a10e0fcd720bff176"}
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.919560 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerStarted","Data":"bfd6dae4fb10d51320c5b40851cb77928f9eb337a4774f99be8d60a2033f0bdc"}
Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.959728 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.95970381 podStartE2EDuration="2.95970381s" podCreationTimestamp="2026-02-18 14:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:03.945068702 +0000 UTC m=+1476.440789644" watchObservedRunningTime="2026-02-18 14:24:03.95970381 +0000 UTC m=+1476.455424732"
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.143612 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.382613 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-xg8g2"
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.433094 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e98338-825d-4f76-833c-2e1ea807d942" path="/var/lib/kubelet/pods/69e98338-825d-4f76-833c-2e1ea807d942/volumes"
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.447183 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wg5zz"
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.447222 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wg5zz"
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.508288 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnk6l\" (UniqueName: \"kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l\") pod \"1543620e-d684-4634-ba89-662f02f2b0e4\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") "
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.508432 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle\") pod \"1543620e-d684-4634-ba89-662f02f2b0e4\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") "
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.508564 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data\") pod \"1543620e-d684-4634-ba89-662f02f2b0e4\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") "
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.508583 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts\") pod \"1543620e-d684-4634-ba89-662f02f2b0e4\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") "
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.525800 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts" (OuterVolumeSpecName: "scripts") pod "1543620e-d684-4634-ba89-662f02f2b0e4" (UID: "1543620e-d684-4634-ba89-662f02f2b0e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.525841 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l" (OuterVolumeSpecName: "kube-api-access-hnk6l") pod "1543620e-d684-4634-ba89-662f02f2b0e4" (UID: "1543620e-d684-4634-ba89-662f02f2b0e4"). InnerVolumeSpecName "kube-api-access-hnk6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.549548 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1543620e-d684-4634-ba89-662f02f2b0e4" (UID: "1543620e-d684-4634-ba89-662f02f2b0e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.557608 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data" (OuterVolumeSpecName: "config-data") pod "1543620e-d684-4634-ba89-662f02f2b0e4" (UID: "1543620e-d684-4634-ba89-662f02f2b0e4"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.611382 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnk6l\" (UniqueName: \"kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.611430 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.611465 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.611480 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.946974 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xg8g2" event={"ID":"1543620e-d684-4634-ba89-662f02f2b0e4","Type":"ContainerDied","Data":"36831a1e37f2b21d3c3aead0d2ccb7ab0dbd8dd55f9fcd39a7a0f41c0dec9ba6"} Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.947290 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36831a1e37f2b21d3c3aead0d2ccb7ab0dbd8dd55f9fcd39a7a0f41c0dec9ba6" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.947508 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.951776 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerStarted","Data":"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed"} Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.951813 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerStarted","Data":"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730"} Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.951822 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerStarted","Data":"f9a2e2a20257041f47da0dff019617b0952ac1e5137c62cf8adc4e7b636524d9"} Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.988038 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.9880145219999998 podStartE2EDuration="1.988014522s" podCreationTimestamp="2026-02-18 14:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:04.971138718 +0000 UTC m=+1477.466859650" watchObservedRunningTime="2026-02-18 14:24:04.988014522 +0000 UTC m=+1477.483735444" Feb 18 14:24:05 crc kubenswrapper[4739]: I0218 14:24:05.168960 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 14:24:05 crc kubenswrapper[4739]: I0218 14:24:05.571969 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:05 crc kubenswrapper[4739]: 
timeout: failed to connect service ":50051" within 1s Feb 18 14:24:05 crc kubenswrapper[4739]: > Feb 18 14:24:06 crc kubenswrapper[4739]: I0218 14:24:06.320791 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 14:24:07 crc kubenswrapper[4739]: I0218 14:24:07.516247 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:24:07 crc kubenswrapper[4739]: I0218 14:24:07.516679 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.557082 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.557695 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" containerName="kube-state-metrics" containerID="cri-o://854525aaeba0262ed326c20d6a5adb12a6f5a5f831c0eda717220f2304b4bf4f" gracePeriod=30 Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.570164 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:09 crc kubenswrapper[4739]: E0218 14:24:09.570689 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1543620e-d684-4634-ba89-662f02f2b0e4" containerName="aodh-db-sync" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.570709 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1543620e-d684-4634-ba89-662f02f2b0e4" containerName="aodh-db-sync" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.570964 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1543620e-d684-4634-ba89-662f02f2b0e4" containerName="aodh-db-sync" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.574154 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.576783 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-747v8" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.576937 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.577336 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.582379 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.674701 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.674913 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmddt\" (UniqueName: \"kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.674992 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.675047 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.726718 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.727353 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="4786d26d-b01e-4e3a-9407-81307b5a1433" containerName="mysqld-exporter" containerID="cri-o://9182016155c2cfd3865f3579fd6250303c57c41f06d79e483e00d365f229195e" gracePeriod=30 Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.777757 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.777968 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmddt\" (UniqueName: \"kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.778036 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.778083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data\") pod \"aodh-0\" (UID: 
\"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.784694 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.785816 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.794830 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.795754 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmddt\" (UniqueName: \"kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.899842 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.011890 4739 generic.go:334] "Generic (PLEG): container finished" podID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" containerID="854525aaeba0262ed326c20d6a5adb12a6f5a5f831c0eda717220f2304b4bf4f" exitCode=2 Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.011948 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d","Type":"ContainerDied","Data":"854525aaeba0262ed326c20d6a5adb12a6f5a5f831c0eda717220f2304b4bf4f"} Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.013263 4739 generic.go:334] "Generic (PLEG): container finished" podID="4786d26d-b01e-4e3a-9407-81307b5a1433" containerID="9182016155c2cfd3865f3579fd6250303c57c41f06d79e483e00d365f229195e" exitCode=2 Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.013281 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4786d26d-b01e-4e3a-9407-81307b5a1433","Type":"ContainerDied","Data":"9182016155c2cfd3865f3579fd6250303c57c41f06d79e483e00d365f229195e"} Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.484622 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.510940 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndzf6\" (UniqueName: \"kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6\") pod \"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d\" (UID: \"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d\") " Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.590692 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6" (OuterVolumeSpecName: "kube-api-access-ndzf6") pod "1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" (UID: "1d9742cc-1407-4631-a6ba-55fe1cc3fe4d"). InnerVolumeSpecName "kube-api-access-ndzf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.638339 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndzf6\" (UniqueName: \"kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.929600 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.941601 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.048026 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle\") pod \"4786d26d-b01e-4e3a-9407-81307b5a1433\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.048405 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data\") pod \"4786d26d-b01e-4e3a-9407-81307b5a1433\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.048452 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xn7l\" (UniqueName: \"kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l\") pod \"4786d26d-b01e-4e3a-9407-81307b5a1433\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.060774 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d","Type":"ContainerDied","Data":"2bc5886939c37fb1062674e7d0eff4b81f7f7a7b2294e0f4745de8bbbca3ba11"} Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.060837 4739 scope.go:117] "RemoveContainer" containerID="854525aaeba0262ed326c20d6a5adb12a6f5a5f831c0eda717220f2304b4bf4f" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.061018 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.069814 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l" (OuterVolumeSpecName: "kube-api-access-2xn7l") pod "4786d26d-b01e-4e3a-9407-81307b5a1433" (UID: "4786d26d-b01e-4e3a-9407-81307b5a1433"). InnerVolumeSpecName "kube-api-access-2xn7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.076807 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4786d26d-b01e-4e3a-9407-81307b5a1433","Type":"ContainerDied","Data":"7802eb786f9fd65a5a871491a73453af4c3e9308ab2608296cd37aed4159f91a"} Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.077121 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.090560 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerStarted","Data":"8762dd17c92d0766d85297d3b8ff657afb0c476107270f6df46caae48fe9cee4"} Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.096870 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4786d26d-b01e-4e3a-9407-81307b5a1433" (UID: "4786d26d-b01e-4e3a-9407-81307b5a1433"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.151815 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.156339 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.156370 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xn7l\" (UniqueName: \"kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.157071 4739 scope.go:117] "RemoveContainer" containerID="9182016155c2cfd3865f3579fd6250303c57c41f06d79e483e00d365f229195e" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.169839 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.195631 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: E0218 14:24:11.196379 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" containerName="kube-state-metrics" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.196398 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" containerName="kube-state-metrics" Feb 18 14:24:11 crc kubenswrapper[4739]: E0218 14:24:11.196428 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4786d26d-b01e-4e3a-9407-81307b5a1433" containerName="mysqld-exporter" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.196435 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4786d26d-b01e-4e3a-9407-81307b5a1433" containerName="mysqld-exporter" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.196708 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" containerName="kube-state-metrics" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.196722 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4786d26d-b01e-4e3a-9407-81307b5a1433" containerName="mysqld-exporter" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.197648 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.202838 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.202867 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.320081 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.361870 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.362299 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " 
pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.362347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.362385 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wn8\" (UniqueName: \"kubernetes.io/projected/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-api-access-m4wn8\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.375646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.463935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.463993 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wn8\" (UniqueName: \"kubernetes.io/projected/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-api-access-m4wn8\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.464089 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.464269 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.467820 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.468030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.469657 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.472920 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.489927 4739 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data" (OuterVolumeSpecName: "config-data") pod "4786d26d-b01e-4e3a-9407-81307b5a1433" (UID: "4786d26d-b01e-4e3a-9407-81307b5a1433"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.529500 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wn8\" (UniqueName: \"kubernetes.io/projected/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-api-access-m4wn8\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.567206 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.795862 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.810853 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.814605 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.832009 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.833776 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.836237 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.836497 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.875825 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-config-data\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.875876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.875942 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.875973 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2cc\" (UniqueName: \"kubernetes.io/projected/8143c3df-5224-4095-a65f-f9f005913b61-kube-api-access-5t2cc\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc 
kubenswrapper[4739]: I0218 14:24:11.886622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.978648 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-config-data\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.978758 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.978852 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.978886 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t2cc\" (UniqueName: \"kubernetes.io/projected/8143c3df-5224-4095-a65f-f9f005913b61-kube-api-access-5t2cc\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:11.995780 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 
18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.009061 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.020021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-config-data\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.022273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t2cc\" (UniqueName: \"kubernetes.io/projected/8143c3df-5224-4095-a65f-f9f005913b61-kube-api-access-5t2cc\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.139195 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.178368 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 14:24:12 crc kubenswrapper[4739]: W0218 14:24:12.414335 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e688eb1_895d_465e_b5d9_a7b7ba9f4650.slice/crio-2b52ae6d206dbcb05111576f31650ab21da5ccd8ddb06594e34f48141096499e WatchSource:0}: Error finding container 2b52ae6d206dbcb05111576f31650ab21da5ccd8ddb06594e34f48141096499e: Status 404 returned error can't find the container with id 2b52ae6d206dbcb05111576f31650ab21da5ccd8ddb06594e34f48141096499e Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.423337 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" path="/var/lib/kubelet/pods/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d/volumes" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.425167 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4786d26d-b01e-4e3a-9407-81307b5a1433" path="/var/lib/kubelet/pods/4786d26d-b01e-4e3a-9407-81307b5a1433/volumes" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.426204 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.516097 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.516572 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.836373 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.837014 4739 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-central-agent" containerID="cri-o://62637c0c6e3d9aa6dd9a357d05be808f306c43132357509831c6c4276f035294" gracePeriod=30 Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.837117 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="proxy-httpd" containerID="cri-o://c29f84cb2f10dd5869ffc87617c8a9e99b5f1b7ab01e8f8f6bf9c1b7fd53866f" gracePeriod=30 Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.837181 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="sg-core" containerID="cri-o://207b5c8f173777a219abe5fab0d30f956acecb4b1b39cab55be3107b97540271" gracePeriod=30 Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.837213 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-notification-agent" containerID="cri-o://3b915056344632cea227fb084003510db6f28165dd95f87eeb8a41b39c07b956" gracePeriod=30 Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.124435 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerID="207b5c8f173777a219abe5fab0d30f956acecb4b1b39cab55be3107b97540271" exitCode=2 Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.124478 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerDied","Data":"207b5c8f173777a219abe5fab0d30f956acecb4b1b39cab55be3107b97540271"} Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.126251 4739 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e688eb1-895d-465e-b5d9-a7b7ba9f4650","Type":"ContainerStarted","Data":"2b52ae6d206dbcb05111576f31650ab21da5ccd8ddb06594e34f48141096499e"} Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.535681 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.536212 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.627646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.627690 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.865369 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.883327 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.148082 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerStarted","Data":"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5"} Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.159281 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerID="c29f84cb2f10dd5869ffc87617c8a9e99b5f1b7ab01e8f8f6bf9c1b7fd53866f" exitCode=0 Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.159313 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerID="62637c0c6e3d9aa6dd9a357d05be808f306c43132357509831c6c4276f035294" exitCode=0 Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.159358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerDied","Data":"c29f84cb2f10dd5869ffc87617c8a9e99b5f1b7ab01e8f8f6bf9c1b7fd53866f"} Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.159466 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerDied","Data":"62637c0c6e3d9aa6dd9a357d05be808f306c43132357509831c6c4276f035294"} Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.160598 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"8143c3df-5224-4095-a65f-f9f005913b61","Type":"ContainerStarted","Data":"dcec1e90c84500a3429b635b18aca2f1bf3f48cd9c5676bce48294227e9813df"} Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.710639 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.710644 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.183669 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerID="3b915056344632cea227fb084003510db6f28165dd95f87eeb8a41b39c07b956" exitCode=0 Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.183964 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerDied","Data":"3b915056344632cea227fb084003510db6f28165dd95f87eeb8a41b39c07b956"} Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.499243 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:15 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:24:15 crc kubenswrapper[4739]: > Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.517477 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693529 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693619 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693712 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693881 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bz55\" (UniqueName: \"kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693912 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.694044 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.695283 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.696342 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.700006 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55" (OuterVolumeSpecName: "kube-api-access-7bz55") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "kube-api-access-7bz55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.701221 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts" (OuterVolumeSpecName: "scripts") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.729380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.801487 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.801528 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.801540 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.801552 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bz55\" (UniqueName: \"kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc 
kubenswrapper[4739]: I0218 14:24:15.801563 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.812647 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.845858 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data" (OuterVolumeSpecName: "config-data") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.904322 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.904361 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.197406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerDied","Data":"633d577ca0d7c26b5d575a55a4d77d6216b341dedf226f7656b21d39f19c64e4"} Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.197917 4739 scope.go:117] "RemoveContainer" containerID="c29f84cb2f10dd5869ffc87617c8a9e99b5f1b7ab01e8f8f6bf9c1b7fd53866f" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.197421 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.200489 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e688eb1-895d-465e-b5d9-a7b7ba9f4650","Type":"ContainerStarted","Data":"7e94e110933254a8f49a8743c9a2da7631a04ab7a4c23f8767ba001ebb44a0bd"} Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.200934 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.231930 4739 scope.go:117] "RemoveContainer" containerID="207b5c8f173777a219abe5fab0d30f956acecb4b1b39cab55be3107b97540271" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.243478 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.072622869 podStartE2EDuration="5.243435028s" podCreationTimestamp="2026-02-18 14:24:11 +0000 UTC" firstStartedPulling="2026-02-18 14:24:12.780517737 +0000 UTC m=+1485.276238659" lastFinishedPulling="2026-02-18 14:24:14.951329896 +0000 UTC m=+1487.447050818" observedRunningTime="2026-02-18 14:24:16.222694346 +0000 UTC m=+1488.718415278" watchObservedRunningTime="2026-02-18 14:24:16.243435028 +0000 UTC m=+1488.739155960" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.267552 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.273799 4739 scope.go:117] "RemoveContainer" containerID="3b915056344632cea227fb084003510db6f28165dd95f87eeb8a41b39c07b956" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.282125 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.301328 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:16 crc kubenswrapper[4739]: E0218 
14:24:16.301935 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-central-agent" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.301952 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-central-agent" Feb 18 14:24:16 crc kubenswrapper[4739]: E0218 14:24:16.301980 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-notification-agent" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.301986 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-notification-agent" Feb 18 14:24:16 crc kubenswrapper[4739]: E0218 14:24:16.302010 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="sg-core" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302016 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="sg-core" Feb 18 14:24:16 crc kubenswrapper[4739]: E0218 14:24:16.302029 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="proxy-httpd" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302035 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="proxy-httpd" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302271 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-central-agent" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302294 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-notification-agent" Feb 18 
14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302306 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="sg-core" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302315 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="proxy-httpd" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302884 4739 scope.go:117] "RemoveContainer" containerID="62637c0c6e3d9aa6dd9a357d05be808f306c43132357509831c6c4276f035294" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.305180 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.307490 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.307690 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.307796 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.322857 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.420645 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.420695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsfrq\" (UniqueName: 
\"kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.420746 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.420904 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.421066 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.421473 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.421805 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " 
pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.421846 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.426438 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" path="/var/lib/kubelet/pods/1ea8be82-c714-4993-b2c0-7af4a7fde0d3/volumes"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524605 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524748 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524828 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524856 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsfrq\" (UniqueName: \"kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524967 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.525181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.525593 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.530313 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.530901 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.531964 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.532549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.534145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.564193 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsfrq\" (UniqueName: \"kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0"
Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.632071 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:24:17 crc kubenswrapper[4739]: I0218 14:24:17.213617 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"8143c3df-5224-4095-a65f-f9f005913b61","Type":"ContainerStarted","Data":"e607278779921b7daa7d5089f3d9fd4d4c9965b020122527d9641a5e0f0f5f29"}
Feb 18 14:24:17 crc kubenswrapper[4739]: I0218 14:24:17.238718 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=4.274881284 podStartE2EDuration="6.238694097s" podCreationTimestamp="2026-02-18 14:24:11 +0000 UTC" firstStartedPulling="2026-02-18 14:24:13.871455371 +0000 UTC m=+1486.367176293" lastFinishedPulling="2026-02-18 14:24:15.835268184 +0000 UTC m=+1488.330989106" observedRunningTime="2026-02-18 14:24:17.230713646 +0000 UTC m=+1489.726434568" watchObservedRunningTime="2026-02-18 14:24:17.238694097 +0000 UTC m=+1489.734415029"
Feb 18 14:24:19 crc kubenswrapper[4739]: I0218 14:24:19.247297 4739 generic.go:334] "Generic (PLEG): container finished" podID="60d51f11-fba7-4368-9665-198dca1f9adc" containerID="4b60b38fea8ccc13c08f02fa56b81b4a343cc57d4a2683a068d2eaff684ca543" exitCode=137
Feb 18 14:24:19 crc kubenswrapper[4739]: I0218 14:24:19.247611 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60d51f11-fba7-4368-9665-198dca1f9adc","Type":"ContainerDied","Data":"4b60b38fea8ccc13c08f02fa56b81b4a343cc57d4a2683a068d2eaff684ca543"}
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.704865 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.756595 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data\") pod \"60d51f11-fba7-4368-9665-198dca1f9adc\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") "
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.757212 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjqbp\" (UniqueName: \"kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp\") pod \"60d51f11-fba7-4368-9665-198dca1f9adc\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") "
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.757263 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle\") pod \"60d51f11-fba7-4368-9665-198dca1f9adc\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") "
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.757507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.762268 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp" (OuterVolumeSpecName: "kube-api-access-vjqbp") pod "60d51f11-fba7-4368-9665-198dca1f9adc" (UID: "60d51f11-fba7-4368-9665-198dca1f9adc"). InnerVolumeSpecName "kube-api-access-vjqbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.800882 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60d51f11-fba7-4368-9665-198dca1f9adc" (UID: "60d51f11-fba7-4368-9665-198dca1f9adc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.801520 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data" (OuterVolumeSpecName: "config-data") pod "60d51f11-fba7-4368-9665-198dca1f9adc" (UID: "60d51f11-fba7-4368-9665-198dca1f9adc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.861648 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.861689 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjqbp\" (UniqueName: \"kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.861704 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.271915 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerStarted","Data":"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3"}
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.273312 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60d51f11-fba7-4368-9665-198dca1f9adc","Type":"ContainerDied","Data":"5e425dc81372bc58ea5a732a114720e008b75f79e58f406fbae181589aeba1b6"}
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.273367 4739 scope.go:117] "RemoveContainer" containerID="4b60b38fea8ccc13c08f02fa56b81b4a343cc57d4a2683a068d2eaff684ca543"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.273992 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.275152 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerStarted","Data":"3cb69177aa55275b8d9b6fef13b5aac13b6cdb36cddbb51be35d3b65d87e5c5e"}
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.365595 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.379364 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.394711 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 14:24:21 crc kubenswrapper[4739]: E0218 14:24:21.395417 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d51f11-fba7-4368-9665-198dca1f9adc" containerName="nova-cell1-novncproxy-novncproxy"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.395462 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d51f11-fba7-4368-9665-198dca1f9adc" containerName="nova-cell1-novncproxy-novncproxy"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.395790 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d51f11-fba7-4368-9665-198dca1f9adc" containerName="nova-cell1-novncproxy-novncproxy"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.396910 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.399921 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.400138 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.400600 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.410105 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.479229 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.479319 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshvb\" (UniqueName: \"kubernetes.io/projected/ea00e513-02cf-4951-b9ec-50966f982142-kube-api-access-qshvb\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.479382 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.479422 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.479692 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.583282 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.583369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshvb\" (UniqueName: \"kubernetes.io/projected/ea00e513-02cf-4951-b9ec-50966f982142-kube-api-access-qshvb\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.583421 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.583470 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.583689 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.592099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.592117 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.592980 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.594130 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.605959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshvb\" (UniqueName: \"kubernetes.io/projected/ea00e513-02cf-4951-b9ec-50966f982142-kube-api-access-qshvb\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.721273 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.860356 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.030262 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.307185 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerStarted","Data":"e4b12677a2033ce8ffaec9a3b3ba58a5ad30b2b8bfd0b94142bf853bf46354ec"}
Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.312331 4739 generic.go:334] "Generic (PLEG): container finished" podID="d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" containerID="f654a93fc558fd96d5cdb40c4eb8145a76ceb6daf5c1d8dd83b579ef3e4f1ae6" exitCode=0
Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.312374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" event={"ID":"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c","Type":"ContainerDied","Data":"f654a93fc558fd96d5cdb40c4eb8145a76ceb6daf5c1d8dd83b579ef3e4f1ae6"}
Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.443919 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d51f11-fba7-4368-9665-198dca1f9adc" path="/var/lib/kubelet/pods/60d51f11-fba7-4368-9665-198dca1f9adc/volumes"
Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.523751 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.533939 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.537004 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.650642 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.429007 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerStarted","Data":"d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79"}
Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.480894 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerStarted","Data":"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d"}
Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.510022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ea00e513-02cf-4951-b9ec-50966f982142","Type":"ContainerStarted","Data":"4c11aae1340e8ea51386680a86fd23bee84c424184f4f4a1a025c61c2ac3f6e2"}
Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.510137 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ea00e513-02cf-4951-b9ec-50966f982142","Type":"ContainerStarted","Data":"716709db730350454c6b698e80462da06d5fdd95d2f7ebf36e70feed7f8aa3a0"}
Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.548038 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.54801115 podStartE2EDuration="2.54801115s" podCreationTimestamp="2026-02-18 14:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:23.535052884 +0000 UTC m=+1496.030773826" watchObservedRunningTime="2026-02-18 14:24:23.54801115 +0000 UTC m=+1496.043732092"
Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.551687 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.637770 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.639696 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.642880 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.651929 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.072600 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7d9ft"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.229118 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle\") pod \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") "
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.229472 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts\") pod \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") "
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.229664 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7768\" (UniqueName: \"kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768\") pod \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") "
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.229719 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data\") pod \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") "
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.271024 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768" (OuterVolumeSpecName: "kube-api-access-l7768") pod "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" (UID: "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c"). InnerVolumeSpecName "kube-api-access-l7768". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.273230 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts" (OuterVolumeSpecName: "scripts") pod "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" (UID: "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.315610 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data" (OuterVolumeSpecName: "config-data") pod "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" (UID: "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.319352 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" (UID: "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.334307 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.334353 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.334369 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7768\" (UniqueName: \"kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.334382 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.454749 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 18 14:24:24 crc kubenswrapper[4739]: E0218 14:24:24.459186 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" containerName="nova-cell1-conductor-db-sync"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.459243 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" containerName="nova-cell1-conductor-db-sync"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.459700 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" containerName="nova-cell1-conductor-db-sync"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.461498 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.482037 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.528587 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" event={"ID":"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c","Type":"ContainerDied","Data":"a2ff715f6687dcb420415366f7ad28d9ba10898b955268123d1a60c93c36a991"}
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.528837 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ff715f6687dcb420415366f7ad28d9ba10898b955268123d1a60c93c36a991"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.528685 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7d9ft"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.539321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerStarted","Data":"4291a3535ff05029212de02ed632a0f0afec9265ce8aaa061f3d8d796d1b98cf"}
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.539398 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.544000 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.641951 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbltm\" (UniqueName: \"kubernetes.io/projected/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-kube-api-access-dbltm\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.642100 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.642335 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.746184 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.746238 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbltm\" (UniqueName: \"kubernetes.io/projected/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-kube-api-access-dbltm\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.746367 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.771304 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"]
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.807727 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.807884 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.823486 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbltm\" (UniqueName: \"kubernetes.io/projected/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-kube-api-access-dbltm\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.835181 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.924046 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"]
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.960555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.960623 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.960673 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.963476 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhzzj\" (UniqueName: \"kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.963917 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.964090 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.070060 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.070283 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.070337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.070383 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhzzj\" (UniqueName: \"kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071289 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071339 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071726 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071679 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.072236 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.092099 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.092269 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhzzj\" (UniqueName: \"kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.201383 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.540475 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:25 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:24:25 crc kubenswrapper[4739]: > Feb 18 14:24:26 crc kubenswrapper[4739]: I0218 14:24:26.345508 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 14:24:26 crc kubenswrapper[4739]: W0218 14:24:26.354568 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffa018e5_ca81_4d0e_86f7_a9c6fb25fdd0.slice/crio-48a05262d7d80a1ca09748961f034f03a5fa9db3c638f19b88e2a9df820d2671 WatchSource:0}: Error finding container 48a05262d7d80a1ca09748961f034f03a5fa9db3c638f19b88e2a9df820d2671: Status 404 returned error can't find the container with id 48a05262d7d80a1ca09748961f034f03a5fa9db3c638f19b88e2a9df820d2671 Feb 18 14:24:26 crc kubenswrapper[4739]: I0218 14:24:26.360887 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"] Feb 18 14:24:26 crc kubenswrapper[4739]: I0218 14:24:26.580396 4739 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" event={"ID":"107ff6da-f0af-471c-bfaf-08364992c44e","Type":"ContainerStarted","Data":"de253019cab38f430ba5baf38246bca706fcc962369cf21cb7d0dd554226a189"} Feb 18 14:24:26 crc kubenswrapper[4739]: I0218 14:24:26.587196 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0","Type":"ContainerStarted","Data":"48a05262d7d80a1ca09748961f034f03a5fa9db3c638f19b88e2a9df820d2671"} Feb 18 14:24:26 crc kubenswrapper[4739]: I0218 14:24:26.721891 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.314302 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.602795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerStarted","Data":"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db"} Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.602879 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-api" containerID="cri-o://941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.602889 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-listener" containerID="cri-o://7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.602941 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-notifier" containerID="cri-o://02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.603029 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-evaluator" containerID="cri-o://5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.612169 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerStarted","Data":"e29998f3df73b3af694e64620572379b35aa9549dde36a0d6b87129b31489083"} Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.612457 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.616515 4739 generic.go:334] "Generic (PLEG): container finished" podID="107ff6da-f0af-471c-bfaf-08364992c44e" containerID="0fa795a89771ccc792842d737411fc77aacef89807fe0ac39f6e7b6973469e7a" exitCode=0 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.616642 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" event={"ID":"107ff6da-f0af-471c-bfaf-08364992c44e","Type":"ContainerDied","Data":"0fa795a89771ccc792842d737411fc77aacef89807fe0ac39f6e7b6973469e7a"} Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.620396 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-log" containerID="cri-o://c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.621147 4739 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0","Type":"ContainerStarted","Data":"4138276df8d1c07cc1007c7db31945d039be4e64d329aa76cf8b93546fa4145e"} Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.621184 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.621245 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-api" containerID="cri-o://8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.651225 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.52332394 podStartE2EDuration="18.651208954s" podCreationTimestamp="2026-02-18 14:24:09 +0000 UTC" firstStartedPulling="2026-02-18 14:24:10.949204041 +0000 UTC m=+1483.444924963" lastFinishedPulling="2026-02-18 14:24:26.077089045 +0000 UTC m=+1498.572809977" observedRunningTime="2026-02-18 14:24:27.642169336 +0000 UTC m=+1500.137890268" watchObservedRunningTime="2026-02-18 14:24:27.651208954 +0000 UTC m=+1500.146929876" Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.690536 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.37799336 podStartE2EDuration="11.690516703s" podCreationTimestamp="2026-02-18 14:24:16 +0000 UTC" firstStartedPulling="2026-02-18 14:24:20.762567131 +0000 UTC m=+1493.258288053" lastFinishedPulling="2026-02-18 14:24:26.075090474 +0000 UTC m=+1498.570811396" observedRunningTime="2026-02-18 14:24:27.677766402 +0000 UTC m=+1500.173487324" watchObservedRunningTime="2026-02-18 14:24:27.690516703 +0000 UTC m=+1500.186237625" Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.764844 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.764824844 podStartE2EDuration="3.764824844s" podCreationTimestamp="2026-02-18 14:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:27.755280063 +0000 UTC m=+1500.251000995" watchObservedRunningTime="2026-02-18 14:24:27.764824844 +0000 UTC m=+1500.260545766" Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.633026 4739 generic.go:334] "Generic (PLEG): container finished" podID="1abac962-efca-4430-8a58-ab62a802c442" containerID="c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730" exitCode=143 Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.633126 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerDied","Data":"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730"} Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.635558 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" event={"ID":"107ff6da-f0af-471c-bfaf-08364992c44e","Type":"ContainerStarted","Data":"6d1fa176139b49aa3f7f2787ae66d435ca3eb9a294abfbc4eac9b73d793efd8b"} Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.635870 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.637608 4739 generic.go:334] "Generic (PLEG): container finished" podID="42803b7f-4360-4d79-94e6-ab17944142ab" containerID="941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5" exitCode=0 Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.637797 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerDied","Data":"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5"} Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.661566 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" podStartSLOduration=4.661543194 podStartE2EDuration="4.661543194s" podCreationTimestamp="2026-02-18 14:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:28.656004234 +0000 UTC m=+1501.151725176" watchObservedRunningTime="2026-02-18 14:24:28.661543194 +0000 UTC m=+1501.157264116" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.372886 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.373192 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.373241 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.374095 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"} 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.374150 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" gracePeriod=600 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.491296 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:29 crc kubenswrapper[4739]: E0218 14:24:29.516655 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.652056 4739 generic.go:334] "Generic (PLEG): container finished" podID="42803b7f-4360-4d79-94e6-ab17944142ab" containerID="5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3" exitCode=0 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.652117 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerDied","Data":"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3"} Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654394 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" exitCode=0 Feb 
18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654452 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"} Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654510 4739 scope.go:117] "RemoveContainer" containerID="d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654674 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-central-agent" containerID="cri-o://e4b12677a2033ce8ffaec9a3b3ba58a5ad30b2b8bfd0b94142bf853bf46354ec" gracePeriod=30 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654710 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="sg-core" containerID="cri-o://4291a3535ff05029212de02ed632a0f0afec9265ce8aaa061f3d8d796d1b98cf" gracePeriod=30 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654779 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-notification-agent" containerID="cri-o://d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79" gracePeriod=30 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654819 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="proxy-httpd" containerID="cri-o://e29998f3df73b3af694e64620572379b35aa9549dde36a0d6b87129b31489083" gracePeriod=30 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.655352 4739 scope.go:117] 
"RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:24:29 crc kubenswrapper[4739]: E0218 14:24:29.655647 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:24:30 crc kubenswrapper[4739]: E0218 14:24:30.442179 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85906c1a_8b4b_4859_a6dc_08dd07710f2a.slice/crio-conmon-d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682822 4739 generic.go:334] "Generic (PLEG): container finished" podID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerID="e29998f3df73b3af694e64620572379b35aa9549dde36a0d6b87129b31489083" exitCode=0 Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682851 4739 generic.go:334] "Generic (PLEG): container finished" podID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerID="4291a3535ff05029212de02ed632a0f0afec9265ce8aaa061f3d8d796d1b98cf" exitCode=2 Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682859 4739 generic.go:334] "Generic (PLEG): container finished" podID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerID="d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79" exitCode=0 Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682904 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerDied","Data":"e29998f3df73b3af694e64620572379b35aa9549dde36a0d6b87129b31489083"} Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682951 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerDied","Data":"4291a3535ff05029212de02ed632a0f0afec9265ce8aaa061f3d8d796d1b98cf"} Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682963 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerDied","Data":"d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79"} Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.685793 4739 generic.go:334] "Generic (PLEG): container finished" podID="42803b7f-4360-4d79-94e6-ab17944142ab" containerID="02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d" exitCode=0 Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.685847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerDied","Data":"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d"} Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.554338 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.627275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle\") pod \"1abac962-efca-4430-8a58-ab62a802c442\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.627434 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data\") pod \"1abac962-efca-4430-8a58-ab62a802c442\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.627593 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs\") pod \"1abac962-efca-4430-8a58-ab62a802c442\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.627626 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf\") pod \"1abac962-efca-4430-8a58-ab62a802c442\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.628018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs" (OuterVolumeSpecName: "logs") pod "1abac962-efca-4430-8a58-ab62a802c442" (UID: "1abac962-efca-4430-8a58-ab62a802c442"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.628239 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.633604 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf" (OuterVolumeSpecName: "kube-api-access-wqqqf") pod "1abac962-efca-4430-8a58-ab62a802c442" (UID: "1abac962-efca-4430-8a58-ab62a802c442"). InnerVolumeSpecName "kube-api-access-wqqqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.670291 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1abac962-efca-4430-8a58-ab62a802c442" (UID: "1abac962-efca-4430-8a58-ab62a802c442"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.674917 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data" (OuterVolumeSpecName: "config-data") pod "1abac962-efca-4430-8a58-ab62a802c442" (UID: "1abac962-efca-4430-8a58-ab62a802c442"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.706859 4739 generic.go:334] "Generic (PLEG): container finished" podID="1abac962-efca-4430-8a58-ab62a802c442" containerID="8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed" exitCode=0 Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.706904 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerDied","Data":"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed"} Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.706932 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerDied","Data":"f9a2e2a20257041f47da0dff019617b0952ac1e5137c62cf8adc4e7b636524d9"} Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.706949 4739 scope.go:117] "RemoveContainer" containerID="8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.707088 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.722340 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.733422 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.733476 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.733488 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.769241 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.784420 4739 scope.go:117] "RemoveContainer" containerID="c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.788096 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.801870 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.819352 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:24:31 crc kubenswrapper[4739]: E0218 14:24:31.819926 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-log"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.819949 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-log"
Feb 18 14:24:31 crc kubenswrapper[4739]: E0218 14:24:31.819969 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-api"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.819976 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-api"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.822424 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-api"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.822470 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-log"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.823932 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.828013 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.828201 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.828309 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.838796 4739 scope.go:117] "RemoveContainer" containerID="8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed"
Feb 18 14:24:31 crc kubenswrapper[4739]: E0218 14:24:31.843812 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed\": container with ID starting with 8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed not found: ID does not exist" containerID="8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.844087 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed"} err="failed to get container status \"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed\": rpc error: code = NotFound desc = could not find container \"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed\": container with ID starting with 8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed not found: ID does not exist"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.844115 4739 scope.go:117] "RemoveContainer" containerID="c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730"
Feb 18 14:24:31 crc kubenswrapper[4739]: E0218 14:24:31.850877 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730\": container with ID starting with c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730 not found: ID does not exist" containerID="c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.850924 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730"} err="failed to get container status \"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730\": rpc error: code = NotFound desc = could not find container \"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730\": container with ID starting with c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730 not found: ID does not exist"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.854699 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.938687 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.938748 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mksjx\" (UniqueName: \"kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.938898 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.939305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.939361 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.939601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043330 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043624 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043670 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mksjx\" (UniqueName: \"kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.044174 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.047919 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.048279 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.048560 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.065852 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mksjx\" (UniqueName: \"kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.066178 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.151941 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.433592 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1abac962-efca-4430-8a58-ab62a802c442" path="/var/lib/kubelet/pods/1abac962-efca-4430-8a58-ab62a802c442/volumes"
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.670874 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.720112 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerStarted","Data":"f916a10a472599240a0b09bda183874925aa520b59c1c803a4b2bd0281891f10"}
Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.741800 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:24:33 crc kubenswrapper[4739]: I0218 14:24:33.746883 4739 generic.go:334] "Generic (PLEG): container finished" podID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerID="e4b12677a2033ce8ffaec9a3b3ba58a5ad30b2b8bfd0b94142bf853bf46354ec" exitCode=0
Feb 18 14:24:33 crc kubenswrapper[4739]: I0218 14:24:33.747008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerDied","Data":"e4b12677a2033ce8ffaec9a3b3ba58a5ad30b2b8bfd0b94142bf853bf46354ec"}
Feb 18 14:24:33 crc kubenswrapper[4739]: I0218 14:24:33.751838 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerStarted","Data":"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2"}
Feb 18 14:24:33 crc kubenswrapper[4739]: I0218 14:24:33.751889 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerStarted","Data":"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0"}
Feb 18 14:24:33 crc kubenswrapper[4739]: I0218 14:24:33.784188 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.784162717 podStartE2EDuration="2.784162717s" podCreationTimestamp="2026-02-18 14:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:33.770027752 +0000 UTC m=+1506.265748694" watchObservedRunningTime="2026-02-18 14:24:33.784162717 +0000 UTC m=+1506.279883639"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.068837 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.103592 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") "
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.103717 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") "
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.103781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") "
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.103832 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") "
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.103939 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") "
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.104023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") "
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.104066 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") "
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.104114 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsfrq\" (UniqueName: \"kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") "
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.106110 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.106485 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.112108 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq" (OuterVolumeSpecName: "kube-api-access-xsfrq") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "kube-api-access-xsfrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.113664 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts" (OuterVolumeSpecName: "scripts") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.154740 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.196274 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207333 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207364 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207376 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207386 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsfrq\" (UniqueName: \"kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207396 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207404 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.238001 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.306672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data" (OuterVolumeSpecName: "config-data") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.310063 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.310095 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.768750 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.769872 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerDied","Data":"3cb69177aa55275b8d9b6fef13b5aac13b6cdb36cddbb51be35d3b65d87e5c5e"}
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.769911 4739 scope.go:117] "RemoveContainer" containerID="e29998f3df73b3af694e64620572379b35aa9549dde36a0d6b87129b31489083"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.802893 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.803555 4739 scope.go:117] "RemoveContainer" containerID="4291a3535ff05029212de02ed632a0f0afec9265ce8aaa061f3d8d796d1b98cf"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.816488 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.826737 4739 scope.go:117] "RemoveContainer" containerID="d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.850499 4739 scope.go:117] "RemoveContainer" containerID="e4b12677a2033ce8ffaec9a3b3ba58a5ad30b2b8bfd0b94142bf853bf46354ec"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.867385 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:24:34 crc kubenswrapper[4739]: E0218 14:24:34.867902 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="sg-core"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.867924 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="sg-core"
Feb 18 14:24:34 crc kubenswrapper[4739]: E0218 14:24:34.867946 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-notification-agent"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.867952 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-notification-agent"
Feb 18 14:24:34 crc kubenswrapper[4739]: E0218 14:24:34.867983 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-central-agent"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.867989 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-central-agent"
Feb 18 14:24:34 crc kubenswrapper[4739]: E0218 14:24:34.868009 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="proxy-httpd"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.868014 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="proxy-httpd"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.868196 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="proxy-httpd"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.868215 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="sg-core"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.868236 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-central-agent"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.868249 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-notification-agent"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.870337 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.872265 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.876002 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.880151 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.880471 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.925636 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.925697 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77zc9\" (UniqueName: \"kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.925737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.926598 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.927156 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.927277 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.927354 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.927505 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.029967 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030052 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77zc9\" (UniqueName: \"kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030106 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030257 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030302 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030849 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030888 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.031204 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.036069 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.036700 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.036786 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.038399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.041535 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.051575 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77zc9\" (UniqueName: \"kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.124914 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.202938 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.216007 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.352872 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"]
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.353401 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="dnsmasq-dns" containerID="cri-o://94476dfafd6d1d5f23f9e15354d4a5e30397b87f6bed37cf1f501afccf7bb2cc" gracePeriod=10
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.515287 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=<
Feb 18 14:24:35 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 14:24:35 crc kubenswrapper[4739]: >
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.806554 4739 generic.go:334] "Generic (PLEG): container finished" podID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerID="94476dfafd6d1d5f23f9e15354d4a5e30397b87f6bed37cf1f501afccf7bb2cc" exitCode=0
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.806594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" event={"ID":"cb3e9cc3-348e-4556-89a2-ea261dd47147","Type":"ContainerDied","Data":"94476dfafd6d1d5f23f9e15354d4a5e30397b87f6bed37cf1f501afccf7bb2cc"}
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.836631 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.903909 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-mvdqm"]
Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.905621 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.913909 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.914118 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.928926 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mvdqm"] Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.963130 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.963429 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9ld7\" (UniqueName: \"kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.963657 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.963888 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.002439 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.066567 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.066623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.066646 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7bkd\" (UniqueName: \"kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.066679 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.066794 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.067090 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.067630 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9ld7\" (UniqueName: \"kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.067712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.067816 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.067957 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts\") pod 
\"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.074773 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.088437 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.095879 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9ld7\" (UniqueName: \"kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.114215 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.118735 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd" (OuterVolumeSpecName: "kube-api-access-p7bkd") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: 
"cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "kube-api-access-p7bkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.167408 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.171326 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7bkd\" (UniqueName: \"kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.171372 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.174604 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.175100 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.186720 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.212004 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config" (OuterVolumeSpecName: "config") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.232730 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.273724 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.273762 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.273774 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.273786 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.436687 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" path="/var/lib/kubelet/pods/85906c1a-8b4b-4859-a6dc-08dd07710f2a/volumes" Feb 18 14:24:36 crc kubenswrapper[4739]: W0218 14:24:36.795398 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod147cff80_30af_4fc7_961f_5f6e17af51bb.slice/crio-7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096 WatchSource:0}: Error finding container 7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096: Status 404 returned error can't find the container with id 7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096 Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.802265 4739 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mvdqm"] Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.825057 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" event={"ID":"cb3e9cc3-348e-4556-89a2-ea261dd47147","Type":"ContainerDied","Data":"3735cb006b027d9cddfe7de2fdfabfbd28a60f1cc6094e080c7661fe3bdd11bf"} Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.825116 4739 scope.go:117] "RemoveContainer" containerID="94476dfafd6d1d5f23f9e15354d4a5e30397b87f6bed37cf1f501afccf7bb2cc" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.825242 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.830307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerStarted","Data":"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13"} Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.830352 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerStarted","Data":"238ba6fcba3c9aab1b9b714ffc70c837313da0593e88c1516a48844a82ac9503"} Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.832597 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mvdqm" event={"ID":"147cff80-30af-4fc7-961f-5f6e17af51bb","Type":"ContainerStarted","Data":"7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096"} Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.859652 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.871841 4739 scope.go:117] "RemoveContainer" 
containerID="21d6c1252de616814b74822ec06612c09a85d4a3dc10b578fb97435ea22e69d8" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.872121 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:24:37 crc kubenswrapper[4739]: I0218 14:24:37.847906 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerStarted","Data":"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd"} Feb 18 14:24:37 crc kubenswrapper[4739]: I0218 14:24:37.851521 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mvdqm" event={"ID":"147cff80-30af-4fc7-961f-5f6e17af51bb","Type":"ContainerStarted","Data":"719754d11a438c2796a0ba11ae2f879324b6243f92382b8f8f42f425c9043930"} Feb 18 14:24:38 crc kubenswrapper[4739]: I0218 14:24:38.436979 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" path="/var/lib/kubelet/pods/cb3e9cc3-348e-4556-89a2-ea261dd47147/volumes" Feb 18 14:24:38 crc kubenswrapper[4739]: I0218 14:24:38.471526 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-mvdqm" podStartSLOduration=3.4715009070000002 podStartE2EDuration="3.471500907s" podCreationTimestamp="2026-02-18 14:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:37.873459025 +0000 UTC m=+1510.369179957" watchObservedRunningTime="2026-02-18 14:24:38.471500907 +0000 UTC m=+1510.967221829" Feb 18 14:24:38 crc kubenswrapper[4739]: I0218 14:24:38.867047 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerStarted","Data":"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7"} Feb 18 
14:24:40 crc kubenswrapper[4739]: I0218 14:24:40.895788 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerStarted","Data":"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be"} Feb 18 14:24:40 crc kubenswrapper[4739]: I0218 14:24:40.896359 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:24:40 crc kubenswrapper[4739]: I0218 14:24:40.935067 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.277022893 podStartE2EDuration="6.935044823s" podCreationTimestamp="2026-02-18 14:24:34 +0000 UTC" firstStartedPulling="2026-02-18 14:24:35.89508807 +0000 UTC m=+1508.390808992" lastFinishedPulling="2026-02-18 14:24:40.55311 +0000 UTC m=+1513.048830922" observedRunningTime="2026-02-18 14:24:40.923847041 +0000 UTC m=+1513.419567963" watchObservedRunningTime="2026-02-18 14:24:40.935044823 +0000 UTC m=+1513.430765765" Feb 18 14:24:42 crc kubenswrapper[4739]: I0218 14:24:42.152318 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:24:42 crc kubenswrapper[4739]: I0218 14:24:42.152991 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:24:42 crc kubenswrapper[4739]: I0218 14:24:42.414827 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:24:42 crc kubenswrapper[4739]: E0218 14:24:42.415203 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:24:42 crc kubenswrapper[4739]: I0218 14:24:42.924941 4739 generic.go:334] "Generic (PLEG): container finished" podID="147cff80-30af-4fc7-961f-5f6e17af51bb" containerID="719754d11a438c2796a0ba11ae2f879324b6243f92382b8f8f42f425c9043930" exitCode=0 Feb 18 14:24:42 crc kubenswrapper[4739]: I0218 14:24:42.924991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mvdqm" event={"ID":"147cff80-30af-4fc7-961f-5f6e17af51bb","Type":"ContainerDied","Data":"719754d11a438c2796a0ba11ae2f879324b6243f92382b8f8f42f425c9043930"} Feb 18 14:24:43 crc kubenswrapper[4739]: I0218 14:24:43.171702 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:43 crc kubenswrapper[4739]: I0218 14:24:43.171702 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.499338 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.591651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle\") pod \"147cff80-30af-4fc7-961f-5f6e17af51bb\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.591725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9ld7\" (UniqueName: \"kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7\") pod \"147cff80-30af-4fc7-961f-5f6e17af51bb\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.591806 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts\") pod \"147cff80-30af-4fc7-961f-5f6e17af51bb\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.591851 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data\") pod \"147cff80-30af-4fc7-961f-5f6e17af51bb\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.610556 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts" (OuterVolumeSpecName: "scripts") pod "147cff80-30af-4fc7-961f-5f6e17af51bb" (UID: "147cff80-30af-4fc7-961f-5f6e17af51bb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.610568 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7" (OuterVolumeSpecName: "kube-api-access-p9ld7") pod "147cff80-30af-4fc7-961f-5f6e17af51bb" (UID: "147cff80-30af-4fc7-961f-5f6e17af51bb"). InnerVolumeSpecName "kube-api-access-p9ld7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.629957 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "147cff80-30af-4fc7-961f-5f6e17af51bb" (UID: "147cff80-30af-4fc7-961f-5f6e17af51bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.632672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data" (OuterVolumeSpecName: "config-data") pod "147cff80-30af-4fc7-961f-5f6e17af51bb" (UID: "147cff80-30af-4fc7-961f-5f6e17af51bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.695352 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.695389 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9ld7\" (UniqueName: \"kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.695403 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.695435 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.951894 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mvdqm" event={"ID":"147cff80-30af-4fc7-961f-5f6e17af51bb","Type":"ContainerDied","Data":"7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096"} Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.951942 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.951984 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.237334 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.237832 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-log" containerID="cri-o://4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0" gracePeriod=30 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.237905 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-api" containerID="cri-o://fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2" gracePeriod=30 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.272339 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.272626 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerName="nova-scheduler-scheduler" containerID="cri-o://8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" gracePeriod=30 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.337569 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.337942 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" containerID="cri-o://82597e5883ccf1e7783fac27d49ed242689bb7c4947b55ae4f7dbaeea0b394fe" gracePeriod=30 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.337865 4739 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" containerID="cri-o://9b767ad311330c4e783eb9ba94b73f05cfa35a7e1442008a10e0fcd720bff176" gracePeriod=30 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.510053 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:45 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:24:45 crc kubenswrapper[4739]: > Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.965038 4739 generic.go:334] "Generic (PLEG): container finished" podID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerID="9b767ad311330c4e783eb9ba94b73f05cfa35a7e1442008a10e0fcd720bff176" exitCode=143 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.965150 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerDied","Data":"9b767ad311330c4e783eb9ba94b73f05cfa35a7e1442008a10e0fcd720bff176"} Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.967262 4739 generic.go:334] "Generic (PLEG): container finished" podID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerID="4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0" exitCode=143 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.967297 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerDied","Data":"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0"} Feb 18 14:24:46 crc kubenswrapper[4739]: E0218 14:24:46.320985 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:24:46 crc kubenswrapper[4739]: E0218 14:24:46.329968 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:24:46 crc kubenswrapper[4739]: E0218 14:24:46.334790 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:24:46 crc kubenswrapper[4739]: E0218 14:24:46.334857 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerName="nova-scheduler-scheduler" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.448603 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.571899 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle\") pod \"2c9cba7f-9b49-4413-a546-9ecf1950d543\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.572101 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data\") pod \"2c9cba7f-9b49-4413-a546-9ecf1950d543\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.572285 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhdw6\" (UniqueName: \"kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6\") pod \"2c9cba7f-9b49-4413-a546-9ecf1950d543\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.578896 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6" (OuterVolumeSpecName: "kube-api-access-dhdw6") pod "2c9cba7f-9b49-4413-a546-9ecf1950d543" (UID: "2c9cba7f-9b49-4413-a546-9ecf1950d543"). InnerVolumeSpecName "kube-api-access-dhdw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.610120 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data" (OuterVolumeSpecName: "config-data") pod "2c9cba7f-9b49-4413-a546-9ecf1950d543" (UID: "2c9cba7f-9b49-4413-a546-9ecf1950d543"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.612747 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c9cba7f-9b49-4413-a546-9ecf1950d543" (UID: "2c9cba7f-9b49-4413-a546-9ecf1950d543"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.675916 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.675957 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhdw6\" (UniqueName: \"kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.675973 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.989263 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" exitCode=0 Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.989331 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.989352 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2c9cba7f-9b49-4413-a546-9ecf1950d543","Type":"ContainerDied","Data":"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7"} Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.989646 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2c9cba7f-9b49-4413-a546-9ecf1950d543","Type":"ContainerDied","Data":"55bf56fc29bc6c5c7c73f1b370236bcbca1545fe9a2d06fed65e1f34bd49bd9b"} Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.989667 4739 scope.go:117] "RemoveContainer" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.026832 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.030272 4739 scope.go:117] "RemoveContainer" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" Feb 18 14:24:48 crc kubenswrapper[4739]: E0218 14:24:48.030845 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7\": container with ID starting with 8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7 not found: ID does not exist" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.030889 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7"} err="failed to get container status \"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7\": rpc error: code = NotFound 
desc = could not find container \"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7\": container with ID starting with 8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7 not found: ID does not exist" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.038833 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.064534 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:48 crc kubenswrapper[4739]: E0218 14:24:48.065555 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="init" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.065580 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="init" Feb 18 14:24:48 crc kubenswrapper[4739]: E0218 14:24:48.065597 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147cff80-30af-4fc7-961f-5f6e17af51bb" containerName="nova-manage" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.065605 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="147cff80-30af-4fc7-961f-5f6e17af51bb" containerName="nova-manage" Feb 18 14:24:48 crc kubenswrapper[4739]: E0218 14:24:48.065672 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerName="nova-scheduler-scheduler" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.065683 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerName="nova-scheduler-scheduler" Feb 18 14:24:48 crc kubenswrapper[4739]: E0218 14:24:48.065704 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="dnsmasq-dns" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.065713 4739 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="dnsmasq-dns" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.066143 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="147cff80-30af-4fc7-961f-5f6e17af51bb" containerName="nova-manage" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.066175 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerName="nova-scheduler-scheduler" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.066225 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="dnsmasq-dns" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.071860 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.080236 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.098050 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.192759 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnccw\" (UniqueName: \"kubernetes.io/projected/ba769c63-86fa-4971-afd8-4e3a57c94c37-kube-api-access-rnccw\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.193417 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-config-data\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc 
kubenswrapper[4739]: I0218 14:24:48.193972 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.296751 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-config-data\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.296959 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.297071 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnccw\" (UniqueName: \"kubernetes.io/projected/ba769c63-86fa-4971-afd8-4e3a57c94c37-kube-api-access-rnccw\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.303998 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-config-data\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.315041 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.315741 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnccw\" (UniqueName: \"kubernetes.io/projected/ba769c63-86fa-4971-afd8-4e3a57c94c37-kube-api-access-rnccw\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.411112 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.426524 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" path="/var/lib/kubelet/pods/2c9cba7f-9b49-4413-a546-9ecf1950d543/volumes" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.476986 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": read tcp 10.217.0.2:51350->10.217.0.250:8775: read: connection reset by peer" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.477164 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": read tcp 10.217.0.2:51352->10.217.0.250:8775: read: connection reset by peer" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.876271 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.006415 4739 generic.go:334] "Generic (PLEG): container finished" podID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerID="fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2" exitCode=0 Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.006519 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerDied","Data":"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2"} Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.006527 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.006556 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerDied","Data":"f916a10a472599240a0b09bda183874925aa520b59c1c803a4b2bd0281891f10"} Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.006581 4739 scope.go:117] "RemoveContainer" containerID="fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.012623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.012677 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc 
kubenswrapper[4739]: I0218 14:24:49.012772 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mksjx\" (UniqueName: \"kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.012954 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.013099 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.013201 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.014163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs" (OuterVolumeSpecName: "logs") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.014518 4739 generic.go:334] "Generic (PLEG): container finished" podID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerID="82597e5883ccf1e7783fac27d49ed242689bb7c4947b55ae4f7dbaeea0b394fe" exitCode=0 Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.014552 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerDied","Data":"82597e5883ccf1e7783fac27d49ed242689bb7c4947b55ae4f7dbaeea0b394fe"} Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.015006 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.019371 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx" (OuterVolumeSpecName: "kube-api-access-mksjx") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "kube-api-access-mksjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.019407 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.062972 4739 scope.go:117] "RemoveContainer" containerID="4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.072594 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.074617 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data" (OuterVolumeSpecName: "config-data") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.097901 4739 scope.go:117] "RemoveContainer" containerID="fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2" Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.101281 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2\": container with ID starting with fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2 not found: ID does not exist" containerID="fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.101343 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2"} err="failed to get container status \"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2\": rpc error: code = NotFound desc = could not find container \"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2\": container with ID starting with fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2 not found: ID does not exist" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.101378 4739 scope.go:117] "RemoveContainer" containerID="4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0" Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.102094 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0\": container with ID starting with 4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0 not found: ID does not exist" containerID="4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.102292 
4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0"} err="failed to get container status \"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0\": rpc error: code = NotFound desc = could not find container \"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0\": container with ID starting with 4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0 not found: ID does not exist" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.105089 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.123222 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.123289 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.123320 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mksjx\" (UniqueName: \"kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.124328 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: W0218 14:24:49.129941 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba769c63_86fa_4971_afd8_4e3a57c94c37.slice/crio-7e3d643592b986c6aa092e3f0c21e8cd6b542f4411dc9f2b5ee9e6c549923bf8 WatchSource:0}: Error finding container 7e3d643592b986c6aa092e3f0c21e8cd6b542f4411dc9f2b5ee9e6c549923bf8: Status 404 returned error can't find the container with id 7e3d643592b986c6aa092e3f0c21e8cd6b542f4411dc9f2b5ee9e6c549923bf8 Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.147764 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.154454 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.226153 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs\") pod \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.226543 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs\") pod \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.226611 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle\") pod \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.226652 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data\") pod \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.227093 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs" (OuterVolumeSpecName: "logs") pod "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" (UID: "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.227638 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxkct\" (UniqueName: \"kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct\") pod \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.228578 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.228603 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.231632 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct" (OuterVolumeSpecName: "kube-api-access-kxkct") pod "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" (UID: "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9"). InnerVolumeSpecName "kube-api-access-kxkct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.269769 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" (UID: "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.281502 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data" (OuterVolumeSpecName: "config-data") pod "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" (UID: "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.330782 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.331104 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.331117 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxkct\" (UniqueName: \"kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.343504 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" (UID: "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.433392 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.506073 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.524887 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.538634 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.539226 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-log" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539249 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-log" Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.539279 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-api" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539287 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-api" Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.539314 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539323 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" Feb 18 14:24:49 crc 
kubenswrapper[4739]: E0218 14:24:49.539344 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539353 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539658 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539684 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-api" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539707 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-log" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539722 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.541122 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.544604 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.544909 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.544954 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.553798 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvr7\" (UniqueName: \"kubernetes.io/projected/3797374a-f0e4-4ba5-8974-c0049bad543a-kube-api-access-vgvr7\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3797374a-f0e4-4ba5-8974-c0049bad543a-logs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636734 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636799 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-config-data\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636825 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-public-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636891 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.739028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvr7\" (UniqueName: \"kubernetes.io/projected/3797374a-f0e4-4ba5-8974-c0049bad543a-kube-api-access-vgvr7\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.739365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3797374a-f0e4-4ba5-8974-c0049bad543a-logs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.739526 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 
14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.739786 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3797374a-f0e4-4ba5-8974-c0049bad543a-logs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.742026 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-config-data\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.742154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-public-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.742410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.743731 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.745292 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-config-data\") pod \"nova-api-0\" (UID: 
\"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.745307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-public-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.745974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.757313 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgvr7\" (UniqueName: \"kubernetes.io/projected/3797374a-f0e4-4ba5-8974-c0049bad543a-kube-api-access-vgvr7\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.862575 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.028137 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ba769c63-86fa-4971-afd8-4e3a57c94c37","Type":"ContainerStarted","Data":"1636797c88c7bca5dee0562720c817ad4b49b23532aeac6a2f073a39b49226f8"} Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.028180 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ba769c63-86fa-4971-afd8-4e3a57c94c37","Type":"ContainerStarted","Data":"7e3d643592b986c6aa092e3f0c21e8cd6b542f4411dc9f2b5ee9e6c549923bf8"} Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.039904 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerDied","Data":"bfd6dae4fb10d51320c5b40851cb77928f9eb337a4774f99be8d60a2033f0bdc"} Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.039977 4739 scope.go:117] "RemoveContainer" containerID="82597e5883ccf1e7783fac27d49ed242689bb7c4947b55ae4f7dbaeea0b394fe" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.040026 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.086683 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.086640823 podStartE2EDuration="2.086640823s" podCreationTimestamp="2026-02-18 14:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:50.044875162 +0000 UTC m=+1522.540596084" watchObservedRunningTime="2026-02-18 14:24:50.086640823 +0000 UTC m=+1522.582361735" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.140997 4739 scope.go:117] "RemoveContainer" containerID="9b767ad311330c4e783eb9ba94b73f05cfa35a7e1442008a10e0fcd720bff176" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.154681 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.175844 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.189407 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.191203 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.193391 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.193500 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.214499 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.364268 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-logs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.364467 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.364499 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-config-data\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.364536 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.364637 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6fhm\" (UniqueName: \"kubernetes.io/projected/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-kube-api-access-s6fhm\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: W0218 14:24:50.396649 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3797374a_f0e4_4ba5_8974_c0049bad543a.slice/crio-b0289f81c423956ccff6abd65e2cc7e54fe9cd32532b4b892ccd50bb8c16fe97 WatchSource:0}: Error finding container b0289f81c423956ccff6abd65e2cc7e54fe9cd32532b4b892ccd50bb8c16fe97: Status 404 returned error can't find the container with id b0289f81c423956ccff6abd65e2cc7e54fe9cd32532b4b892ccd50bb8c16fe97 Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.399695 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.428073 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" path="/var/lib/kubelet/pods/61e22e5d-021a-404b-b763-cf02d6f2bc9e/volumes" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.429059 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" path="/var/lib/kubelet/pods/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9/volumes" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.466379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-logs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " 
pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.466725 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.466853 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-config-data\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.467638 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.467798 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6fhm\" (UniqueName: \"kubernetes.io/projected/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-kube-api-access-s6fhm\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.468103 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-logs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.473545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-config-data\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.474851 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.477539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.486698 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6fhm\" (UniqueName: \"kubernetes.io/projected/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-kube-api-access-s6fhm\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.519829 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:51 crc kubenswrapper[4739]: I0218 14:24:51.028706 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:51 crc kubenswrapper[4739]: I0218 14:24:51.055873 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab30c1a-7b94-430a-ac85-ebe051fadbfe","Type":"ContainerStarted","Data":"b7949bf0504636ba7470d86467b8f7a73f72aaed74e2bece861ff361637d8ca6"} Feb 18 14:24:51 crc kubenswrapper[4739]: I0218 14:24:51.060911 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3797374a-f0e4-4ba5-8974-c0049bad543a","Type":"ContainerStarted","Data":"63b5fc512db3014a7d27150983656813bfb6384f0c18e481a78d2d5a2cf9e2de"} Feb 18 14:24:51 crc kubenswrapper[4739]: I0218 14:24:51.060962 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3797374a-f0e4-4ba5-8974-c0049bad543a","Type":"ContainerStarted","Data":"b0289f81c423956ccff6abd65e2cc7e54fe9cd32532b4b892ccd50bb8c16fe97"} Feb 18 14:24:51 crc kubenswrapper[4739]: I0218 14:24:51.094807 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.094782008 podStartE2EDuration="2.094782008s" podCreationTimestamp="2026-02-18 14:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:51.083920044 +0000 UTC m=+1523.579640986" watchObservedRunningTime="2026-02-18 14:24:51.094782008 +0000 UTC m=+1523.590502940" Feb 18 14:24:52 crc kubenswrapper[4739]: I0218 14:24:52.086550 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab30c1a-7b94-430a-ac85-ebe051fadbfe","Type":"ContainerStarted","Data":"02ec586e36f7939cda1f715fd21a9c1aac1cb9c54f06b99b38b45d3c69507700"} Feb 18 14:24:52 crc kubenswrapper[4739]: 
I0218 14:24:52.087184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab30c1a-7b94-430a-ac85-ebe051fadbfe","Type":"ContainerStarted","Data":"168c96ecd94121fa27a50b0fd7a3cbd831d0d9dc5f7694db3143ce6d5c7a4ac4"} Feb 18 14:24:52 crc kubenswrapper[4739]: I0218 14:24:52.091752 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3797374a-f0e4-4ba5-8974-c0049bad543a","Type":"ContainerStarted","Data":"9a3e3811d24c3d72675df149801c40c079f95efb8af67e161c27273ea4b83485"} Feb 18 14:24:52 crc kubenswrapper[4739]: I0218 14:24:52.121941 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.12191616 podStartE2EDuration="2.12191616s" podCreationTimestamp="2026-02-18 14:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:52.10998172 +0000 UTC m=+1524.605702662" watchObservedRunningTime="2026-02-18 14:24:52.12191616 +0000 UTC m=+1524.617637082" Feb 18 14:24:53 crc kubenswrapper[4739]: I0218 14:24:53.410355 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:24:53 crc kubenswrapper[4739]: E0218 14:24:53.410964 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:24:53 crc kubenswrapper[4739]: I0218 14:24:53.411179 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 14:24:55 crc kubenswrapper[4739]: I0218 
14:24:55.495727 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:55 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:24:55 crc kubenswrapper[4739]: > Feb 18 14:24:55 crc kubenswrapper[4739]: I0218 14:24:55.520471 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:24:55 crc kubenswrapper[4739]: I0218 14:24:55.520839 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.148964 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.163955 4739 generic.go:334] "Generic (PLEG): container finished" podID="42803b7f-4360-4d79-94e6-ab17944142ab" containerID="7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db" exitCode=137 Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.164007 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.164023 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerDied","Data":"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db"} Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.164210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerDied","Data":"8762dd17c92d0766d85297d3b8ff657afb0c476107270f6df46caae48fe9cee4"} Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.164232 4739 scope.go:117] "RemoveContainer" containerID="7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.232359 4739 scope.go:117] "RemoveContainer" containerID="02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.273282 4739 scope.go:117] "RemoveContainer" containerID="5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.274918 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts\") pod \"42803b7f-4360-4d79-94e6-ab17944142ab\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.275217 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmddt\" (UniqueName: \"kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt\") pod \"42803b7f-4360-4d79-94e6-ab17944142ab\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.275269 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle\") pod \"42803b7f-4360-4d79-94e6-ab17944142ab\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.275340 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data\") pod \"42803b7f-4360-4d79-94e6-ab17944142ab\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.283218 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts" (OuterVolumeSpecName: "scripts") pod "42803b7f-4360-4d79-94e6-ab17944142ab" (UID: "42803b7f-4360-4d79-94e6-ab17944142ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.300536 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt" (OuterVolumeSpecName: "kube-api-access-hmddt") pod "42803b7f-4360-4d79-94e6-ab17944142ab" (UID: "42803b7f-4360-4d79-94e6-ab17944142ab"). InnerVolumeSpecName "kube-api-access-hmddt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.379404 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmddt\" (UniqueName: \"kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.379464 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.435792 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.457282 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42803b7f-4360-4d79-94e6-ab17944142ab" (UID: "42803b7f-4360-4d79-94e6-ab17944142ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.457718 4739 scope.go:117] "RemoveContainer" containerID="941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.467823 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data" (OuterVolumeSpecName: "config-data") pod "42803b7f-4360-4d79-94e6-ab17944142ab" (UID: "42803b7f-4360-4d79-94e6-ab17944142ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.472293 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.484713 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.484989 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.509863 4739 scope.go:117] "RemoveContainer" containerID="7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.510381 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db\": container with ID starting with 7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db not found: ID does not exist" containerID="7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.510409 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db"} err="failed to get container status \"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db\": rpc error: code = NotFound desc = could not find container \"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db\": container with ID starting with 7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db not found: ID does not exist" Feb 18 
14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.510430 4739 scope.go:117] "RemoveContainer" containerID="02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.510760 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d\": container with ID starting with 02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d not found: ID does not exist" containerID="02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.510811 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d"} err="failed to get container status \"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d\": rpc error: code = NotFound desc = could not find container \"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d\": container with ID starting with 02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d not found: ID does not exist" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.510840 4739 scope.go:117] "RemoveContainer" containerID="5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.511110 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3\": container with ID starting with 5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3 not found: ID does not exist" containerID="5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.511140 4739 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3"} err="failed to get container status \"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3\": rpc error: code = NotFound desc = could not find container \"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3\": container with ID starting with 5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3 not found: ID does not exist" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.511160 4739 scope.go:117] "RemoveContainer" containerID="941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.511366 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5\": container with ID starting with 941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5 not found: ID does not exist" containerID="941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.511390 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5"} err="failed to get container status \"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5\": rpc error: code = NotFound desc = could not find container \"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5\": container with ID starting with 941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5 not found: ID does not exist" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.812126 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.836158 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 18 
14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.854205 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.854784 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-evaluator" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.854804 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-evaluator" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.854816 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-notifier" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.854823 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-notifier" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.854862 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-api" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.854871 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-api" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.854881 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-listener" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.854887 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-listener" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.855134 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-listener" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.855146 4739 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-api" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.855159 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-evaluator" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.855171 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-notifier" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.857422 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.859949 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.860206 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-747v8" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.860486 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.860500 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.860568 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.872888 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " 
pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000621 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000681 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bhhc\" (UniqueName: \"kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000765 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000915 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000992 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103307 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103415 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bhhc\" (UniqueName: \"kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103494 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103633 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103706 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.107341 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.107920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.108425 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.108562 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.108975 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.123706 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bhhc\" (UniqueName: \"kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 
14:24:59.198609 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.216894 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 14:24:59 crc kubenswrapper[4739]: W0218 14:24:59.734777 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7f699b8_95a0_4a37_8a9b_fb4bd7b46d3e.slice/crio-4d6f0aeaea08a012f733e13300610a5640aaa1fafeeed5ec43bbbd5b2b9a8193 WatchSource:0}: Error finding container 4d6f0aeaea08a012f733e13300610a5640aaa1fafeeed5ec43bbbd5b2b9a8193: Status 404 returned error can't find the container with id 4d6f0aeaea08a012f733e13300610a5640aaa1fafeeed5ec43bbbd5b2b9a8193 Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.737763 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.863283 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.863363 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.199185 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerStarted","Data":"4d6f0aeaea08a012f733e13300610a5640aaa1fafeeed5ec43bbbd5b2b9a8193"} Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.426486 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" path="/var/lib/kubelet/pods/42803b7f-4360-4d79-94e6-ab17944142ab/volumes" Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.522595 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.522971 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.883652 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3797374a-f0e4-4ba5-8974-c0049bad543a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.883742 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3797374a-f0e4-4ba5-8974-c0049bad543a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:25:01 crc kubenswrapper[4739]: I0218 14:25:01.212021 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerStarted","Data":"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7"} Feb 18 14:25:01 crc kubenswrapper[4739]: I0218 14:25:01.534632 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ab30c1a-7b94-430a-ac85-ebe051fadbfe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:25:01 crc kubenswrapper[4739]: I0218 14:25:01.534813 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ab30c1a-7b94-430a-ac85-ebe051fadbfe" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:25:02 crc 
kubenswrapper[4739]: I0218 14:25:02.227360 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerStarted","Data":"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba"} Feb 18 14:25:03 crc kubenswrapper[4739]: I0218 14:25:03.363691 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerStarted","Data":"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578"} Feb 18 14:25:04 crc kubenswrapper[4739]: I0218 14:25:04.380509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerStarted","Data":"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2"} Feb 18 14:25:04 crc kubenswrapper[4739]: I0218 14:25:04.410960 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.930595528 podStartE2EDuration="6.410940657s" podCreationTimestamp="2026-02-18 14:24:58 +0000 UTC" firstStartedPulling="2026-02-18 14:24:59.737422087 +0000 UTC m=+1532.233143009" lastFinishedPulling="2026-02-18 14:25:03.217767216 +0000 UTC m=+1535.713488138" observedRunningTime="2026-02-18 14:25:04.404250399 +0000 UTC m=+1536.899971341" watchObservedRunningTime="2026-02-18 14:25:04.410940657 +0000 UTC m=+1536.906661579" Feb 18 14:25:05 crc kubenswrapper[4739]: I0218 14:25:05.241618 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 14:25:05 crc kubenswrapper[4739]: I0218 14:25:05.431203 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:25:05 crc kubenswrapper[4739]: E0218 14:25:05.431595 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:25:05 crc kubenswrapper[4739]: I0218 14:25:05.524540 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:25:05 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:25:05 crc kubenswrapper[4739]: > Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.174032 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.178645 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.222665 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.254678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.254798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzcpg\" (UniqueName: \"kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.254850 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.356616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.357040 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.357115 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzcpg\" (UniqueName: \"kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.357949 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.357987 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.392275 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzcpg\" (UniqueName: \"kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.503527 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:07 crc kubenswrapper[4739]: I0218 14:25:07.056028 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:07 crc kubenswrapper[4739]: I0218 14:25:07.497596 4739 generic.go:334] "Generic (PLEG): container finished" podID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerID="4b0e7c8eb140916b6e74a21779841b69e440908d6b9c1495731308eccfead9ee" exitCode=0 Feb 18 14:25:07 crc kubenswrapper[4739]: I0218 14:25:07.497655 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerDied","Data":"4b0e7c8eb140916b6e74a21779841b69e440908d6b9c1495731308eccfead9ee"} Feb 18 14:25:07 crc kubenswrapper[4739]: I0218 14:25:07.497705 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerStarted","Data":"7c5472457e574250ce229e71104c3d275504739870445d71eac020fa408b2be9"} Feb 18 14:25:08 crc kubenswrapper[4739]: I0218 14:25:08.510925 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerStarted","Data":"a106ffa9468bacea91bf206e3ffa0e7c8fce2c895e2ec88f67739b589eca025e"} Feb 18 14:25:09 crc kubenswrapper[4739]: I0218 14:25:09.914595 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 14:25:09 crc kubenswrapper[4739]: I0218 14:25:09.915226 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 14:25:09 crc kubenswrapper[4739]: I0218 14:25:09.928529 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 14:25:09 crc 
kubenswrapper[4739]: I0218 14:25:09.938124 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 14:25:10 crc kubenswrapper[4739]: I0218 14:25:10.527166 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 14:25:10 crc kubenswrapper[4739]: I0218 14:25:10.531900 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 14:25:10 crc kubenswrapper[4739]: I0218 14:25:10.533903 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 14:25:10 crc kubenswrapper[4739]: I0218 14:25:10.536257 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 14:25:10 crc kubenswrapper[4739]: I0218 14:25:10.549785 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 14:25:11 crc kubenswrapper[4739]: I0218 14:25:11.544660 4739 generic.go:334] "Generic (PLEG): container finished" podID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerID="a106ffa9468bacea91bf206e3ffa0e7c8fce2c895e2ec88f67739b589eca025e" exitCode=0 Feb 18 14:25:11 crc kubenswrapper[4739]: I0218 14:25:11.545354 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerDied","Data":"a106ffa9468bacea91bf206e3ffa0e7c8fce2c895e2ec88f67739b589eca025e"} Feb 18 14:25:11 crc kubenswrapper[4739]: I0218 14:25:11.550486 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.551032 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.553811 4739 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.558506 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerStarted","Data":"22351a3a2397469328039c02b022a8237d7b70dc6f17d1c811f89df28961a051"} Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.570368 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.631008 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p9dsf" podStartSLOduration=2.010775503 podStartE2EDuration="6.630983901s" podCreationTimestamp="2026-02-18 14:25:06 +0000 UTC" firstStartedPulling="2026-02-18 14:25:07.499661518 +0000 UTC m=+1539.995382440" lastFinishedPulling="2026-02-18 14:25:12.119869916 +0000 UTC m=+1544.615590838" observedRunningTime="2026-02-18 14:25:12.623127563 +0000 UTC m=+1545.118848495" watchObservedRunningTime="2026-02-18 14:25:12.630983901 +0000 UTC m=+1545.126704823" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.726826 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.727822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkwz\" (UniqueName: \"kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " 
pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.728178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.830507 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.830703 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.830777 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pkwz\" (UniqueName: \"kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.831389 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " 
pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.831544 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.856282 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pkwz\" (UniqueName: \"kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.873143 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:13 crc kubenswrapper[4739]: I0218 14:25:13.653351 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:13 crc kubenswrapper[4739]: W0218 14:25:13.653831 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51 WatchSource:0}: Error finding container 2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51: Status 404 returned error can't find the container with id 2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51 Feb 18 14:25:14 crc kubenswrapper[4739]: I0218 14:25:14.508137 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:25:14 crc kubenswrapper[4739]: I0218 14:25:14.572221 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:25:14 crc kubenswrapper[4739]: I0218 14:25:14.628740 4739 generic.go:334] "Generic (PLEG): container finished" podID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerID="6c0ee0eafacbca4301c6ded44d73ba09227c9ee1f2e6957623ca4214bd62e5df" exitCode=0 Feb 18 14:25:14 crc kubenswrapper[4739]: I0218 14:25:14.629927 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerDied","Data":"6c0ee0eafacbca4301c6ded44d73ba09227c9ee1f2e6957623ca4214bd62e5df"} Feb 18 14:25:14 crc kubenswrapper[4739]: I0218 14:25:14.630011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerStarted","Data":"2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51"} Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.504116 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.504712 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.563343 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.664823 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerStarted","Data":"eb767b246d01786ba7d5e7aea0f8547789de5633ab93f7984d8f9084bda9cde1"} Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.750314 4739 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.750625 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" containerID="cri-o://efd61b74e3eaf8a43ba51f508d08a1af562b43d4efba62cb59c8fb5bbe916eec" gracePeriod=2 Feb 18 14:25:17 crc kubenswrapper[4739]: I0218 14:25:17.687178 4739 generic.go:334] "Generic (PLEG): container finished" podID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerID="eb767b246d01786ba7d5e7aea0f8547789de5633ab93f7984d8f9084bda9cde1" exitCode=0 Feb 18 14:25:17 crc kubenswrapper[4739]: I0218 14:25:17.687250 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerDied","Data":"eb767b246d01786ba7d5e7aea0f8547789de5633ab93f7984d8f9084bda9cde1"} Feb 18 14:25:17 crc kubenswrapper[4739]: I0218 14:25:17.693355 4739 generic.go:334] "Generic (PLEG): container finished" podID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerID="efd61b74e3eaf8a43ba51f508d08a1af562b43d4efba62cb59c8fb5bbe916eec" exitCode=0 Feb 18 14:25:17 crc kubenswrapper[4739]: I0218 14:25:17.693398 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerDied","Data":"efd61b74e3eaf8a43ba51f508d08a1af562b43d4efba62cb59c8fb5bbe916eec"} Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.037014 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.180498 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-724gm\" (UniqueName: \"kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm\") pod \"0bbaed51-382b-4b1b-8b3f-95521f415472\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.180585 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content\") pod \"0bbaed51-382b-4b1b-8b3f-95521f415472\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.180870 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities\") pod \"0bbaed51-382b-4b1b-8b3f-95521f415472\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.182184 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities" (OuterVolumeSpecName: "utilities") pod "0bbaed51-382b-4b1b-8b3f-95521f415472" (UID: "0bbaed51-382b-4b1b-8b3f-95521f415472"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.187562 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm" (OuterVolumeSpecName: "kube-api-access-724gm") pod "0bbaed51-382b-4b1b-8b3f-95521f415472" (UID: "0bbaed51-382b-4b1b-8b3f-95521f415472"). InnerVolumeSpecName "kube-api-access-724gm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.283213 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.283249 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-724gm\" (UniqueName: \"kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.314246 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bbaed51-382b-4b1b-8b3f-95521f415472" (UID: "0bbaed51-382b-4b1b-8b3f-95521f415472"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.386035 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.706997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerStarted","Data":"2cf4cbe6ff09b90a4081b821121e04359d9724929504c9ff576ebbffcc98ba2d"} Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.710182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerDied","Data":"8246321a9a69ef9443f0eafe62f613f2bf2304eee3857bb71521e44ea71bf052"} Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 
14:25:18.710232 4739 scope.go:117] "RemoveContainer" containerID="efd61b74e3eaf8a43ba51f508d08a1af562b43d4efba62cb59c8fb5bbe916eec" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.710277 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.735129 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d2qcv" podStartSLOduration=3.284026877 podStartE2EDuration="6.735103679s" podCreationTimestamp="2026-02-18 14:25:12 +0000 UTC" firstStartedPulling="2026-02-18 14:25:14.633728459 +0000 UTC m=+1547.129449381" lastFinishedPulling="2026-02-18 14:25:18.084805261 +0000 UTC m=+1550.580526183" observedRunningTime="2026-02-18 14:25:18.729700683 +0000 UTC m=+1551.225421645" watchObservedRunningTime="2026-02-18 14:25:18.735103679 +0000 UTC m=+1551.230824601" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.755851 4739 scope.go:117] "RemoveContainer" containerID="0ed9ea0acaa9a000246ad43383e3ff8712eb08ccc211dd774ede3a75ac80e158" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.772144 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.786978 4739 scope.go:117] "RemoveContainer" containerID="6869795123dd672f097b8cf90d0e5e277663d03ea727ac622ba0a62b525526df" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.789239 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:25:20 crc kubenswrapper[4739]: I0218 14:25:20.412303 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:25:20 crc kubenswrapper[4739]: E0218 14:25:20.413046 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:25:20 crc kubenswrapper[4739]: I0218 14:25:20.424653 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" path="/var/lib/kubelet/pods/0bbaed51-382b-4b1b-8b3f-95521f415472/volumes" Feb 18 14:25:22 crc kubenswrapper[4739]: I0218 14:25:22.873865 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:22 crc kubenswrapper[4739]: I0218 14:25:22.875338 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:22 crc kubenswrapper[4739]: I0218 14:25:22.930592 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.006484 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-2dhxm"] Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.017691 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-2dhxm"] Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.073482 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-zq8vc"] Feb 18 14:25:23 crc kubenswrapper[4739]: E0218 14:25:23.073964 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="extract-utilities" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.073982 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" 
containerName="extract-utilities" Feb 18 14:25:23 crc kubenswrapper[4739]: E0218 14:25:23.073993 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.073999 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" Feb 18 14:25:23 crc kubenswrapper[4739]: E0218 14:25:23.074029 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="extract-content" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.074035 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="extract-content" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.074310 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.075305 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.099619 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zq8vc"] Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.211431 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.211747 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62h27\" (UniqueName: \"kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.212104 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.314281 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.314426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62h27\" (UniqueName: 
\"kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.314574 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.320681 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.331000 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.341023 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62h27\" (UniqueName: \"kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.399532 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.863356 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.892700 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zq8vc"] Feb 18 14:25:23 crc kubenswrapper[4739]: W0218 14:25:23.895916 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e0a952f_ef12_46c6_8ca8_10f016b441be.slice/crio-254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a WatchSource:0}: Error finding container 254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a: Status 404 returned error can't find the container with id 254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a Feb 18 14:25:24 crc kubenswrapper[4739]: I0218 14:25:24.553378 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" path="/var/lib/kubelet/pods/3edd4390-e376-469a-b7c5-9bd7bf9dd100/volumes" Feb 18 14:25:24 crc kubenswrapper[4739]: I0218 14:25:24.813098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zq8vc" event={"ID":"6e0a952f-ef12-46c6-8ca8-10f016b441be","Type":"ContainerStarted","Data":"254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a"} Feb 18 14:25:24 crc kubenswrapper[4739]: I0218 14:25:24.945041 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.768424 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.769363 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-central-agent" containerID="cri-o://3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13" gracePeriod=30 Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.769523 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="proxy-httpd" containerID="cri-o://6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be" gracePeriod=30 Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.769573 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="sg-core" containerID="cri-o://9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7" gracePeriod=30 Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.769609 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-notification-agent" containerID="cri-o://251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd" gracePeriod=30 Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.885446 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.983485 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.623993 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.845827 4739 generic.go:334] "Generic (PLEG): container finished" podID="4106c506-1336-4121-a8d7-90fe333ce3df" containerID="6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be" exitCode=0 Feb 18 
14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.845868 4739 generic.go:334] "Generic (PLEG): container finished" podID="4106c506-1336-4121-a8d7-90fe333ce3df" containerID="9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7" exitCode=2 Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.845880 4739 generic.go:334] "Generic (PLEG): container finished" podID="4106c506-1336-4121-a8d7-90fe333ce3df" containerID="3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13" exitCode=0 Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.846091 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d2qcv" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="registry-server" containerID="cri-o://2cf4cbe6ff09b90a4081b821121e04359d9724929504c9ff576ebbffcc98ba2d" gracePeriod=2 Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.846382 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerDied","Data":"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be"} Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.846408 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerDied","Data":"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7"} Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.846417 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerDied","Data":"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13"} Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.360292 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.361984 
4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p9dsf" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="registry-server" containerID="cri-o://22351a3a2397469328039c02b022a8237d7b70dc6f17d1c811f89df28961a051" gracePeriod=2 Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.894415 4739 generic.go:334] "Generic (PLEG): container finished" podID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerID="22351a3a2397469328039c02b022a8237d7b70dc6f17d1c811f89df28961a051" exitCode=0 Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.894591 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerDied","Data":"22351a3a2397469328039c02b022a8237d7b70dc6f17d1c811f89df28961a051"} Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.899988 4739 generic.go:334] "Generic (PLEG): container finished" podID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerID="2cf4cbe6ff09b90a4081b821121e04359d9724929504c9ff576ebbffcc98ba2d" exitCode=0 Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.900027 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerDied","Data":"2cf4cbe6ff09b90a4081b821121e04359d9724929504c9ff576ebbffcc98ba2d"} Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.900052 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerDied","Data":"2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51"} Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.900063 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51" 
Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.946478 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.004375 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pkwz\" (UniqueName: \"kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz\") pod \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.004526 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content\") pod \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.004591 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities\") pod \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.006610 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities" (OuterVolumeSpecName: "utilities") pod "0a62b266-b24d-47e5-ae8d-cb8524e1d628" (UID: "0a62b266-b24d-47e5-ae8d-cb8524e1d628"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.033991 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz" (OuterVolumeSpecName: "kube-api-access-9pkwz") pod "0a62b266-b24d-47e5-ae8d-cb8524e1d628" (UID: "0a62b266-b24d-47e5-ae8d-cb8524e1d628"). InnerVolumeSpecName "kube-api-access-9pkwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.072780 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a62b266-b24d-47e5-ae8d-cb8524e1d628" (UID: "0a62b266-b24d-47e5-ae8d-cb8524e1d628"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.109792 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pkwz\" (UniqueName: \"kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.109837 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.109849 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.962593 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.964994 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.965083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerDied","Data":"7c5472457e574250ce229e71104c3d275504739870445d71eac020fa408b2be9"} Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.965153 4739 scope.go:117] "RemoveContainer" containerID="22351a3a2397469328039c02b022a8237d7b70dc6f17d1c811f89df28961a051" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.048023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzcpg\" (UniqueName: \"kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg\") pod \"8fa37c4d-3105-4641-8568-f29938b5cecc\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.048249 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities\") pod \"8fa37c4d-3105-4641-8568-f29938b5cecc\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.048332 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content\") pod \"8fa37c4d-3105-4641-8568-f29938b5cecc\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.059037 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.065974 4739 scope.go:117] "RemoveContainer" containerID="a106ffa9468bacea91bf206e3ffa0e7c8fce2c895e2ec88f67739b589eca025e" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.070373 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities" (OuterVolumeSpecName: "utilities") pod "8fa37c4d-3105-4641-8568-f29938b5cecc" (UID: "8fa37c4d-3105-4641-8568-f29938b5cecc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.086067 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg" (OuterVolumeSpecName: "kube-api-access-fzcpg") pod "8fa37c4d-3105-4641-8568-f29938b5cecc" (UID: "8fa37c4d-3105-4641-8568-f29938b5cecc"). InnerVolumeSpecName "kube-api-access-fzcpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.127613 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.152837 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzcpg\" (UniqueName: \"kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.152867 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.206707 4739 scope.go:117] "RemoveContainer" containerID="4b0e7c8eb140916b6e74a21779841b69e440908d6b9c1495731308eccfead9ee" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.207202 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fa37c4d-3105-4641-8568-f29938b5cecc" (UID: "8fa37c4d-3105-4641-8568-f29938b5cecc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.259370 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.984877 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.032883 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.051444 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.347740 4739 scope.go:117] "RemoveContainer" containerID="81f81c7066b7b4c95e8c6b6a3d0a11548cf322b1e9bf818f0a394ac79e2c2399" Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.411598 4739 scope.go:117] "RemoveContainer" containerID="edabb29e619ae1eeb2b3b44d914c9284ac1c7ae85b8069685bf0ec6983667b3d" Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.434527 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" path="/var/lib/kubelet/pods/0a62b266-b24d-47e5-ae8d-cb8524e1d628/volumes" Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.438496 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" path="/var/lib/kubelet/pods/8fa37c4d-3105-4641-8568-f29938b5cecc/volumes" Feb 18 14:25:30 crc kubenswrapper[4739]: E0218 14:25:30.521281 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:31 crc kubenswrapper[4739]: I0218 14:25:31.930082 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.024642 4739 generic.go:334] "Generic (PLEG): container finished" podID="4106c506-1336-4121-a8d7-90fe333ce3df" containerID="251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd" exitCode=0 Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.024763 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerDied","Data":"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd"} Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.025054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerDied","Data":"238ba6fcba3c9aab1b9b714ffc70c837313da0593e88c1516a48844a82ac9503"} Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.025087 4739 scope.go:117] "RemoveContainer" containerID="6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.024927 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030233 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030300 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030419 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030656 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030696 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77zc9\" (UniqueName: \"kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030735 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030823 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030877 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.031325 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.031484 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.034874 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.050067 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts" (OuterVolumeSpecName: "scripts") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.100946 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9" (OuterVolumeSpecName: "kube-api-access-77zc9") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "kube-api-access-77zc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.159572 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.159634 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.159648 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77zc9\" (UniqueName: \"kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.187748 4739 scope.go:117] "RemoveContainer" containerID="9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.189406 4739 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" containerID="cri-o://1196a1e6460811c94c46f39dbe0fd6c6f691e4c8c02027977bcbe32e7ab65403" gracePeriod=604794 Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.228214 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.235749 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.235771 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" containerID="cri-o://3228467af95ce70d1ea7ebd3cd207c3fd6c54c75409aecf8eea728d75488502d" gracePeriod=604794 Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.272989 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.273031 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.296665 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data" (OuterVolumeSpecName: "config-data") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.301620 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.328041 4739 scope.go:117] "RemoveContainer" containerID="251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.372301 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.375687 4739 scope.go:117] "RemoveContainer" containerID="3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.376006 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.376038 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.394159 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.428996 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" path="/var/lib/kubelet/pods/4106c506-1336-4121-a8d7-90fe333ce3df/volumes" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.429952 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430411 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-central-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430428 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-central-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430446 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="proxy-httpd" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430470 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="proxy-httpd" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430486 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="extract-utilities" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430496 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="extract-utilities" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430508 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="extract-content" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430516 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="extract-content" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430525 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="sg-core" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430532 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="sg-core" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430566 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430572 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430583 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="extract-content" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430588 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="extract-content" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430604 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430610 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430625 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-notification-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430631 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-notification-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430653 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="extract-utilities" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430660 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="extract-utilities" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430854 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-notification-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430864 4739 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-central-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430881 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430894 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430928 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="proxy-httpd" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430943 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="sg-core" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.441175 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.444581 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.444809 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.444834 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.464739 4739 scope.go:117] "RemoveContainer" containerID="6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.465765 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be\": container with ID starting with 6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be not found: ID does not exist" containerID="6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.465796 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be"} err="failed to get container status \"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be\": rpc error: code = NotFound desc = could not find container \"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be\": container with ID starting with 6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be not found: ID does not exist" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.465821 4739 scope.go:117] "RemoveContainer" containerID="9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7" Feb 18 14:25:32 crc 
kubenswrapper[4739]: E0218 14:25:32.466147 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7\": container with ID starting with 9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7 not found: ID does not exist" containerID="9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.466222 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7"} err="failed to get container status \"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7\": rpc error: code = NotFound desc = could not find container \"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7\": container with ID starting with 9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7 not found: ID does not exist" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.466274 4739 scope.go:117] "RemoveContainer" containerID="251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.466917 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd\": container with ID starting with 251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd not found: ID does not exist" containerID="251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.466975 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd"} err="failed to get container status 
\"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd\": rpc error: code = NotFound desc = could not find container \"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd\": container with ID starting with 251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd not found: ID does not exist" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.467020 4739 scope.go:117] "RemoveContainer" containerID="3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.467398 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13\": container with ID starting with 3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13 not found: ID does not exist" containerID="3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.467429 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13"} err="failed to get container status \"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13\": rpc error: code = NotFound desc = could not find container \"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13\": container with ID starting with 3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13 not found: ID does not exist" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.471219 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.588058 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.588222 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-scripts\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.588336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-config-data\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.588664 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.588871 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.589094 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-run-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 
14:25:32.589599 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kkdj\" (UniqueName: \"kubernetes.io/projected/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-kube-api-access-5kkdj\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.589748 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-log-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692044 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692475 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-scripts\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692534 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-config-data\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692670 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692747 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-run-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692807 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kkdj\" (UniqueName: \"kubernetes.io/projected/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-kube-api-access-5kkdj\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692860 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-log-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.693871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-log-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 
14:25:32.694050 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-run-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.698758 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.699412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.699916 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.700110 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-scripts\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.703245 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-config-data\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " 
pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.711899 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kkdj\" (UniqueName: \"kubernetes.io/projected/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-kube-api-access-5kkdj\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.748877 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.760176 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:25:33 crc kubenswrapper[4739]: I0218 14:25:33.215006 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 18 14:25:33 crc kubenswrapper[4739]: I0218 14:25:33.286236 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 18 14:25:33 crc kubenswrapper[4739]: I0218 14:25:33.356724 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:34 crc kubenswrapper[4739]: I0218 14:25:34.057051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"ff5543db541b8d9ceb32c87a5b1108377bedd8a766d344ce85931e1103feec8e"} Feb 18 14:25:35 crc kubenswrapper[4739]: I0218 14:25:35.410688 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:25:35 crc kubenswrapper[4739]: E0218 14:25:35.411161 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:25:39 crc kubenswrapper[4739]: I0218 14:25:39.168161 4739 generic.go:334] "Generic (PLEG): container finished" podID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" 
containerID="1196a1e6460811c94c46f39dbe0fd6c6f691e4c8c02027977bcbe32e7ab65403" exitCode=0 Feb 18 14:25:39 crc kubenswrapper[4739]: I0218 14:25:39.168373 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerDied","Data":"1196a1e6460811c94c46f39dbe0fd6c6f691e4c8c02027977bcbe32e7ab65403"} Feb 18 14:25:39 crc kubenswrapper[4739]: I0218 14:25:39.171274 4739 generic.go:334] "Generic (PLEG): container finished" podID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerID="3228467af95ce70d1ea7ebd3cd207c3fd6c54c75409aecf8eea728d75488502d" exitCode=0 Feb 18 14:25:39 crc kubenswrapper[4739]: I0218 14:25:39.171322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerDied","Data":"3228467af95ce70d1ea7ebd3cd207c3fd6c54c75409aecf8eea728d75488502d"} Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.520894 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"] Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.523568 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.526821 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.545722 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"] Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632065 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632142 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgn4q\" (UniqueName: \"kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632258 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632282 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: 
\"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632499 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632783 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.735629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.735707 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") 
" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.735745 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgn4q\" (UniqueName: \"kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.735851 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.735877 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.736017 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.736063 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " 
pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.739281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.741198 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.741335 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.741808 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.741884 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 
14:25:42.742938 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.763461 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgn4q\" (UniqueName: \"kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.861082 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:43 crc kubenswrapper[4739]: E0218 14:25:43.180198 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:43 crc kubenswrapper[4739]: I0218 14:25:43.214717 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 18 14:25:45 crc kubenswrapper[4739]: E0218 14:25:45.101825 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.109132 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221370 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf5kv\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221418 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221514 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221542 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: 
\"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221570 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221595 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221887 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.222660 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.222733 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.222760 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.228262 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.228672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.231196 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.233919 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.235978 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.236038 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info" (OuterVolumeSpecName: "pod-info") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.245416 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv" (OuterVolumeSpecName: "kube-api-access-rf5kv") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "kube-api-access-rf5kv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.293962 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerDied","Data":"d4d2f4d954b6b105d9d4d012df3327d247d4b0d91bb0c3076d3bbe9f637b4cc0"} Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.294010 4739 scope.go:117] "RemoveContainer" containerID="3228467af95ce70d1ea7ebd3cd207c3fd6c54c75409aecf8eea728d75488502d" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.294159 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.324717 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091" (OuterVolumeSpecName: "persistence") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: E0218 14:25:45.325351 4739 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/vol_data.json]: open /var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/vol_data.json]: open 
/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/vol_data.json: no such file or directory" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.327344 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data" (OuterVolumeSpecName: "config-data") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.330291 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.331740 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.331854 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf5kv\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.331924 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.332407 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc 
kubenswrapper[4739]: I0218 14:25:45.332514 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.332579 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.332755 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.332858 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") on node \"crc\" " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.348926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf" (OuterVolumeSpecName: "server-conf") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.414924 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.415107 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091") on node "crc" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.435540 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.436132 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.481516 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.539050 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.648563 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.674005 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.693856 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:45 crc kubenswrapper[4739]: E0218 14:25:45.694648 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="setup-container" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.694661 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="setup-container" Feb 18 14:25:45 crc kubenswrapper[4739]: E0218 14:25:45.694703 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.694709 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.694940 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.696200 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.700281 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.700618 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.701045 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.701334 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvn4l" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.701597 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.701841 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.706250 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.714830 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749428 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749569 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749760 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749813 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5hv\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-kube-api-access-5g5hv\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749941 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749985 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.750003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.750070 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.750130 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852290 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g5hv\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-kube-api-access-5g5hv\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852494 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852528 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852915 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852919 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.853633 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.853712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.853869 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.853900 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.853974 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.854053 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.854502 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.857062 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.857195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.857202 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.859644 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.859748 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b4e22e9c66b4b9e31fc01977dfa2f505609dd5b0e95d61de241c54ade9d7a505/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.860134 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.860642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.861612 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 
crc kubenswrapper[4739]: I0218 14:25:45.865541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.873005 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g5hv\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-kube-api-access-5g5hv\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.923587 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:46 crc kubenswrapper[4739]: I0218 14:25:46.039307 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:46 crc kubenswrapper[4739]: I0218 14:25:46.430498 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" path="/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes" Feb 18 14:25:48 crc kubenswrapper[4739]: E0218 14:25:48.265747 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:48 crc kubenswrapper[4739]: E0218 14:25:48.265814 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:48 crc kubenswrapper[4739]: I0218 14:25:48.285201 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: i/o timeout" Feb 18 14:25:49 crc kubenswrapper[4739]: I0218 14:25:49.410746 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:25:49 crc kubenswrapper[4739]: E0218 14:25:49.411266 
4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:25:50 crc kubenswrapper[4739]: E0218 14:25:50.028763 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 14:25:50 crc kubenswrapper[4739]: E0218 14:25:50.028835 4739 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 14:25:50 crc kubenswrapper[4739]: E0218 14:25:50.028975 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62h27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-zq8vc_openstack(6e0a952f-ef12-46c6-8ca8-10f016b441be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 
18 14:25:50 crc kubenswrapper[4739]: E0218 14:25:50.030173 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-zq8vc" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" Feb 18 14:25:50 crc kubenswrapper[4739]: E0218 14:25:50.353340 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-zq8vc" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.656843 4739 scope.go:117] "RemoveContainer" containerID="a716eae534567c7eacf310c551635181608ae4e159e2fd3e991903215040cab2" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.795988 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.899675 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.899763 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.899826 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxbz\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.899895 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.899945 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.900022 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.900100 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.900207 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.900237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.901178 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.903811 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.908040 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.909373 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.914154 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.919189 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.942765 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.955696 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz" (OuterVolumeSpecName: "kube-api-access-vbxbz") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "kube-api-access-vbxbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.956013 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.957836 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info" (OuterVolumeSpecName: "pod-info") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.965031 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c" (OuterVolumeSpecName: "persistence") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.016998 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017037 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017059 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") on node \"crc\" " Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017071 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017081 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017091 
4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxbz\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017188 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.021982 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data" (OuterVolumeSpecName: "config-data") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.060713 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf" (OuterVolumeSpecName: "server-conf") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.086170 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.086516 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c") on node "crc" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.123651 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.123694 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.123707 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.176432 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"] Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.176641 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.225575 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.380395 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerDied","Data":"a323ec96e46e55ecd38a675963f8fb957be29188446c4c0701ca364f77566a1b"} Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.380604 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.432709 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.460938 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.474903 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:52 crc kubenswrapper[4739]: E0218 14:25:52.475553 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="setup-container" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.475580 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="setup-container" Feb 18 14:25:52 crc kubenswrapper[4739]: E0218 14:25:52.475616 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.475625 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.475946 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.478136 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.501289 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.636975 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637089 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637138 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83da58fc-6d28-4a56-abc1-00267082c6b6-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637173 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637206 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5n5w\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-kube-api-access-p5n5w\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637300 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-server-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637356 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83da58fc-6d28-4a56-abc1-00267082c6b6-pod-info\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-config-data\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637494 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637563 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: W0218 14:25:52.668173 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb44bafed_1808_41fc_b2bb_fcd2f1f02a17.slice/crio-330f6a433a08cd27da7dffd6b8364dcfdd6172336292c14a7b18e098f0eac2e6 WatchSource:0}: Error finding container 330f6a433a08cd27da7dffd6b8364dcfdd6172336292c14a7b18e098f0eac2e6: Status 404 returned error can't find the container with id 330f6a433a08cd27da7dffd6b8364dcfdd6172336292c14a7b18e098f0eac2e6 Feb 18 14:25:52 crc kubenswrapper[4739]: E0218 14:25:52.691106 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 14:25:52 crc kubenswrapper[4739]: E0218 14:25:52.691157 4739 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 14:25:52 crc kubenswrapper[4739]: E0218 14:25:52.691282 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66ch94h6ch565h664h77h658hcch5b5h66ch86h5dfh85h5d6h576hd7hc4h544h587h649hb8h64ch86h5b9h597h677h59bhcch89h667h5b6h674q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5kkdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.712851 4739 scope.go:117] "RemoveContainer" containerID="1196a1e6460811c94c46f39dbe0fd6c6f691e4c8c02027977bcbe32e7ab65403" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.740676 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-config-data\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.741541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-config-data\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.741636 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.741793 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.741914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.741994 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742208 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742309 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/83da58fc-6d28-4a56-abc1-00267082c6b6-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742400 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742490 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5n5w\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-kube-api-access-p5n5w\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742579 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-server-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742670 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83da58fc-6d28-4a56-abc1-00267082c6b6-pod-info\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.743403 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " 
pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.743684 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.743725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.744421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-server-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.745959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83da58fc-6d28-4a56-abc1-00267082c6b6-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.747099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83da58fc-6d28-4a56-abc1-00267082c6b6-pod-info\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.749912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.758572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.764942 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.764996 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/42f2352e597643fb9091206ae40b48fcb025360f730dba5ba00ebee7f81842b7/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.776120 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5n5w\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-kube-api-access-p5n5w\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.852927 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") 
pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.874588 4739 scope.go:117] "RemoveContainer" containerID="aca2d7cf6c996ecda1b70039221c80c30560394fd55fdc793dfd46773ab29a77" Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.101543 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.247135 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.394418 4739 generic.go:334] "Generic (PLEG): container finished" podID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerID="540b32810564c1395af833055ad23799a4b1a66b7693eafbe7c3cebb7f686098" exitCode=0 Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.394795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" event={"ID":"b44bafed-1808-41fc-b2bb-fcd2f1f02a17","Type":"ContainerDied","Data":"540b32810564c1395af833055ad23799a4b1a66b7693eafbe7c3cebb7f686098"} Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.394827 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" event={"ID":"b44bafed-1808-41fc-b2bb-fcd2f1f02a17","Type":"ContainerStarted","Data":"330f6a433a08cd27da7dffd6b8364dcfdd6172336292c14a7b18e098f0eac2e6"} Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.400531 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c71b6fb5-d59d-479d-b3fc-996d14bd93ed","Type":"ContainerStarted","Data":"a0e467492a9d509677a9a2ce5bfb03daf177f33b7ad5e3a75510348a76449f90"} Feb 18 14:25:53 crc kubenswrapper[4739]: E0218 14:25:53.563686 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.640904 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.425099 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" path="/var/lib/kubelet/pods/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b/volumes" Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.427202 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"83da58fc-6d28-4a56-abc1-00267082c6b6","Type":"ContainerStarted","Data":"cff9d9bb2d51c4be81fd339ad6c53fe9f2e85c7e962f244b119134a8ef83ff99"} Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.427246 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"aaab1c29ca5a9641b89b3702969fcc58211756abc60eeb4909036a0cbf64a830"} Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.427265 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" event={"ID":"b44bafed-1808-41fc-b2bb-fcd2f1f02a17","Type":"ContainerStarted","Data":"44feab0d878b49b40c9f78094ee6d7d5fb8f3aacd6959e36e5fce0d47077102d"} Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.427296 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.449540 4739 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" podStartSLOduration=12.449523239 podStartE2EDuration="12.449523239s" podCreationTimestamp="2026-02-18 14:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:25:54.44159134 +0000 UTC m=+1586.937312272" watchObservedRunningTime="2026-02-18 14:25:54.449523239 +0000 UTC m=+1586.945244161" Feb 18 14:25:55 crc kubenswrapper[4739]: I0218 14:25:55.441878 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c71b6fb5-d59d-479d-b3fc-996d14bd93ed","Type":"ContainerStarted","Data":"9c40a962e22b100be23a7a0163ebcb66d15c4bd51bb227f4c767cbf6c58812d0"} Feb 18 14:25:56 crc kubenswrapper[4739]: I0218 14:25:56.458414 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"a7cd61cee84e63df9331a9f85d1b2cfa167e94f3ff8dd7c7a78e021305137855"} Feb 18 14:25:56 crc kubenswrapper[4739]: I0218 14:25:56.461319 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"83da58fc-6d28-4a56-abc1-00267082c6b6","Type":"ContainerStarted","Data":"109a1d01b2b388822b4017533289f525bb0875693261feeb825b93643fe2bf46"} Feb 18 14:25:59 crc kubenswrapper[4739]: E0218 14:25:59.893535 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" Feb 18 14:26:00 crc kubenswrapper[4739]: E0218 14:26:00.356888 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache]" Feb 18 14:26:00 crc kubenswrapper[4739]: I0218 14:26:00.522018 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"9e20d9cae3babc8c64d126e1fd80af304a9f344aba078a57ae3836ac23fe1ccb"} Feb 18 14:26:00 crc kubenswrapper[4739]: I0218 14:26:00.523509 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:26:00 crc kubenswrapper[4739]: E0218 14:26:00.524522 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" Feb 18 14:26:01 crc kubenswrapper[4739]: I0218 14:26:01.411013 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:26:01 crc kubenswrapper[4739]: E0218 14:26:01.411756 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:26:01 crc kubenswrapper[4739]: E0218 14:26:01.538660 
4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" Feb 18 14:26:02 crc kubenswrapper[4739]: I0218 14:26:02.862649 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:26:02 crc kubenswrapper[4739]: I0218 14:26:02.919997 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"] Feb 18 14:26:02 crc kubenswrapper[4739]: I0218 14:26:02.920225 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="dnsmasq-dns" containerID="cri-o://6d1fa176139b49aa3f7f2787ae66d435ca3eb9a294abfbc4eac9b73d793efd8b" gracePeriod=10 Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.248841 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-hd9ps"] Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.253015 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.287573 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-hd9ps"]
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376024 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-config\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376125 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb96t\" (UniqueName: \"kubernetes.io/projected/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-kube-api-access-zb96t\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376282 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.479988 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-config\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480089 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480128 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480179 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480230 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb96t\" (UniqueName: \"kubernetes.io/projected/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-kube-api-access-zb96t\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480326 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.491404 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-config\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.496108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.497058 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.499547 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.500206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.520871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.539716 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb96t\" (UniqueName: \"kubernetes.io/projected/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-kube-api-access-zb96t\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.568727 4739 generic.go:334] "Generic (PLEG): container finished" podID="107ff6da-f0af-471c-bfaf-08364992c44e" containerID="6d1fa176139b49aa3f7f2787ae66d435ca3eb9a294abfbc4eac9b73d793efd8b" exitCode=0
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.568819 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" event={"ID":"107ff6da-f0af-471c-bfaf-08364992c44e","Type":"ContainerDied","Data":"6d1fa176139b49aa3f7f2787ae66d435ca3eb9a294abfbc4eac9b73d793efd8b"}
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.588464 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.589836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zq8vc" event={"ID":"6e0a952f-ef12-46c6-8ca8-10f016b441be","Type":"ContainerStarted","Data":"03775c57719ac4b92c1847bc19cfdeea48db66d3dda5aee4aca36cb4a626f862"}
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.607096 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-zq8vc" podStartSLOduration=1.937988382 podStartE2EDuration="40.60707409s" podCreationTimestamp="2026-02-18 14:25:23 +0000 UTC" firstStartedPulling="2026-02-18 14:25:23.899414832 +0000 UTC m=+1556.395135754" lastFinishedPulling="2026-02-18 14:26:02.56850054 +0000 UTC m=+1595.064221462" observedRunningTime="2026-02-18 14:26:03.60586713 +0000 UTC m=+1596.101588052" watchObservedRunningTime="2026-02-18 14:26:03.60707409 +0000 UTC m=+1596.102795002"
Feb 18 14:26:03 crc kubenswrapper[4739]: E0218 14:26:03.649928 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.860571 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.991781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") "
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.991824 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") "
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.991964 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhzzj\" (UniqueName: \"kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") "
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.992012 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") "
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.992084 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") "
Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.993068 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") "
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.011756 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj" (OuterVolumeSpecName: "kube-api-access-bhzzj") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "kube-api-access-bhzzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.066412 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.070589 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.071456 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config" (OuterVolumeSpecName: "config") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.079646 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.098386 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhzzj\" (UniqueName: \"kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.098434 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.098475 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.098487 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.098501 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.133876 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.200360 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:04 crc kubenswrapper[4739]: W0218 14:26:04.264898 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod703ba4cc_fc0d_4adf_bb13_62fecb68cff7.slice/crio-c55efd4826e71b6188592be5407eec8201b186e25c4100dd43bf4c4a245597fb WatchSource:0}: Error finding container c55efd4826e71b6188592be5407eec8201b186e25c4100dd43bf4c4a245597fb: Status 404 returned error can't find the container with id c55efd4826e71b6188592be5407eec8201b186e25c4100dd43bf4c4a245597fb
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.267909 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-hd9ps"]
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.623966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" event={"ID":"107ff6da-f0af-471c-bfaf-08364992c44e","Type":"ContainerDied","Data":"de253019cab38f430ba5baf38246bca706fcc962369cf21cb7d0dd554226a189"}
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.624018 4739 scope.go:117] "RemoveContainer" containerID="6d1fa176139b49aa3f7f2787ae66d435ca3eb9a294abfbc4eac9b73d793efd8b"
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.624324 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn"
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.631135 4739 generic.go:334] "Generic (PLEG): container finished" podID="703ba4cc-fc0d-4adf-bb13-62fecb68cff7" containerID="98d1809038cf13a45c6ba78f1f6327a486ba6d0c214ebfd42b91cf4f479624a4" exitCode=0
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.631176 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" event={"ID":"703ba4cc-fc0d-4adf-bb13-62fecb68cff7","Type":"ContainerDied","Data":"98d1809038cf13a45c6ba78f1f6327a486ba6d0c214ebfd42b91cf4f479624a4"}
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.631202 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" event={"ID":"703ba4cc-fc0d-4adf-bb13-62fecb68cff7","Type":"ContainerStarted","Data":"c55efd4826e71b6188592be5407eec8201b186e25c4100dd43bf4c4a245597fb"}
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.666743 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"]
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.668924 4739 scope.go:117] "RemoveContainer" containerID="0fa795a89771ccc792842d737411fc77aacef89807fe0ac39f6e7b6973469e7a"
Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.692380 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"]
Feb 18 14:26:05 crc kubenswrapper[4739]: I0218 14:26:05.658836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" event={"ID":"703ba4cc-fc0d-4adf-bb13-62fecb68cff7","Type":"ContainerStarted","Data":"c5166190b91acc9de964e3359fd2d81b6451f09adb70bba097a96fc40c919a96"}
Feb 18 14:26:05 crc kubenswrapper[4739]: I0218 14:26:05.660977 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps"
Feb 18 14:26:05 crc kubenswrapper[4739]: I0218 14:26:05.668830 4739 generic.go:334] "Generic (PLEG): container finished" podID="6e0a952f-ef12-46c6-8ca8-10f016b441be" containerID="03775c57719ac4b92c1847bc19cfdeea48db66d3dda5aee4aca36cb4a626f862" exitCode=0
Feb 18 14:26:05 crc kubenswrapper[4739]: I0218 14:26:05.668930 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zq8vc" event={"ID":"6e0a952f-ef12-46c6-8ca8-10f016b441be","Type":"ContainerDied","Data":"03775c57719ac4b92c1847bc19cfdeea48db66d3dda5aee4aca36cb4a626f862"}
Feb 18 14:26:05 crc kubenswrapper[4739]: I0218 14:26:05.699275 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" podStartSLOduration=2.699247429 podStartE2EDuration="2.699247429s" podCreationTimestamp="2026-02-18 14:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:26:05.694693254 +0000 UTC m=+1598.190414176" watchObservedRunningTime="2026-02-18 14:26:05.699247429 +0000 UTC m=+1598.194968361"
Feb 18 14:26:06 crc kubenswrapper[4739]: I0218 14:26:06.423714 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" path="/var/lib/kubelet/pods/107ff6da-f0af-471c-bfaf-08364992c44e/volumes"
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.197739 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zq8vc"
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.285592 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62h27\" (UniqueName: \"kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27\") pod \"6e0a952f-ef12-46c6-8ca8-10f016b441be\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") "
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.285799 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle\") pod \"6e0a952f-ef12-46c6-8ca8-10f016b441be\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") "
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.285882 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data\") pod \"6e0a952f-ef12-46c6-8ca8-10f016b441be\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") "
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.291832 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27" (OuterVolumeSpecName: "kube-api-access-62h27") pod "6e0a952f-ef12-46c6-8ca8-10f016b441be" (UID: "6e0a952f-ef12-46c6-8ca8-10f016b441be"). InnerVolumeSpecName "kube-api-access-62h27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.340387 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e0a952f-ef12-46c6-8ca8-10f016b441be" (UID: "6e0a952f-ef12-46c6-8ca8-10f016b441be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.374594 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data" (OuterVolumeSpecName: "config-data") pod "6e0a952f-ef12-46c6-8ca8-10f016b441be" (UID: "6e0a952f-ef12-46c6-8ca8-10f016b441be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.394463 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62h27\" (UniqueName: \"kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.394493 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.394503 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.700082 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zq8vc" event={"ID":"6e0a952f-ef12-46c6-8ca8-10f016b441be","Type":"ContainerDied","Data":"254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a"}
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.700135 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a"
Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.700583 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zq8vc"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.713172 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5957545cb-6lrc2"]
Feb 18 14:26:08 crc kubenswrapper[4739]: E0218 14:26:08.713990 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="dnsmasq-dns"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.714007 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="dnsmasq-dns"
Feb 18 14:26:08 crc kubenswrapper[4739]: E0218 14:26:08.714035 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" containerName="heat-db-sync"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.714041 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" containerName="heat-db-sync"
Feb 18 14:26:08 crc kubenswrapper[4739]: E0218 14:26:08.714056 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="init"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.714061 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="init"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.714260 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" containerName="heat-db-sync"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.714292 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="dnsmasq-dns"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.715057 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.729282 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5957545cb-6lrc2"]
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.775346 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5cfc6d5787-cxgnr"]
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.778843 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.794083 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cfc6d5787-cxgnr"]
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.825801 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-8dd984b75-2cjs7"]
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.827310 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8dd984b75-2cjs7"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.831822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-combined-ca-bundle\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.831858 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98xcv\" (UniqueName: \"kubernetes.io/projected/26539513-f274-471e-ad4a-10bcd4758458-kube-api-access-98xcv\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.831925 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjfj\" (UniqueName: \"kubernetes.io/projected/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-kube-api-access-qvjfj\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.831953 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data-custom\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832014 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-internal-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832037 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832069 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-public-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832167 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-combined-ca-bundle\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data-custom\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.854431 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8dd984b75-2cjs7"]
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934129 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data-custom\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934263 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-combined-ca-bundle\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98xcv\" (UniqueName: \"kubernetes.io/projected/26539513-f274-471e-ad4a-10bcd4758458-kube-api-access-98xcv\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934319 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934367 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-internal-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934397 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvjfj\" (UniqueName: \"kubernetes.io/projected/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-kube-api-access-qvjfj\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934427 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-combined-ca-bundle\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934465 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-public-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934482 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data-custom\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934536 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-internal-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934560 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934615 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-public-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934670 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-combined-ca-bundle\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sv7w\" (UniqueName: \"kubernetes.io/projected/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-kube-api-access-7sv7w\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934746 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data-custom\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.941300 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.942007 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-public-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.942424 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-combined-ca-bundle\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.943514 4739 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data-custom\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.943899 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.944967 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data-custom\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.951306 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-internal-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.954610 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvjfj\" (UniqueName: \"kubernetes.io/projected/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-kube-api-access-qvjfj\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.957326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-combined-ca-bundle\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.957935 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98xcv\" (UniqueName: \"kubernetes.io/projected/26539513-f274-471e-ad4a-10bcd4758458-kube-api-access-98xcv\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.035538 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037101 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data-custom\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037187 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037226 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-internal-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 
14:26:09.037262 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-combined-ca-bundle\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037287 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-public-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037423 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sv7w\" (UniqueName: \"kubernetes.io/projected/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-kube-api-access-7sv7w\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.042206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-internal-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.047680 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.055281 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-public-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.055301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data-custom\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.055549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-combined-ca-bundle\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.059547 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sv7w\" (UniqueName: \"kubernetes.io/projected/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-kube-api-access-7sv7w\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.121534 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.152703 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: W0218 14:26:09.587706 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26539513_f274_471e_ad4a_10bcd4758458.slice/crio-20d25184e087a0f8edeea8b121d05ef712c75c5140f848c2fb0a00c0c47c3f29 WatchSource:0}: Error finding container 20d25184e087a0f8edeea8b121d05ef712c75c5140f848c2fb0a00c0c47c3f29: Status 404 returned error can't find the container with id 20d25184e087a0f8edeea8b121d05ef712c75c5140f848c2fb0a00c0c47c3f29 Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.594038 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5957545cb-6lrc2"] Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.721377 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5957545cb-6lrc2" event={"ID":"26539513-f274-471e-ad4a-10bcd4758458","Type":"ContainerStarted","Data":"20d25184e087a0f8edeea8b121d05ef712c75c5140f848c2fb0a00c0c47c3f29"} Feb 18 14:26:09 crc kubenswrapper[4739]: W0218 14:26:09.728309 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c65abc8_9ca5_4a28_89d7_f5ffe23d1040.slice/crio-c4f01b6bc2775d69d72467f0eedb94b8633a59f1fc4446808c2a8fa25cb2fb08 WatchSource:0}: Error finding container c4f01b6bc2775d69d72467f0eedb94b8633a59f1fc4446808c2a8fa25cb2fb08: Status 404 returned error can't find the container with id c4f01b6bc2775d69d72467f0eedb94b8633a59f1fc4446808c2a8fa25cb2fb08 Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.732108 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cfc6d5787-cxgnr"] Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.883683 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8dd984b75-2cjs7"] Feb 18 14:26:09 crc kubenswrapper[4739]: W0218 
14:26:09.887197 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd1f6fa_009d_4942_98ad_203c31a7bf5b.slice/crio-5e11b45b876f04f731c74c59c5e7f0a906e1af8f45937ab9a072f46072bf3bb1 WatchSource:0}: Error finding container 5e11b45b876f04f731c74c59c5e7f0a906e1af8f45937ab9a072f46072bf3bb1: Status 404 returned error can't find the container with id 5e11b45b876f04f731c74c59c5e7f0a906e1af8f45937ab9a072f46072bf3bb1 Feb 18 14:26:10 crc kubenswrapper[4739]: I0218 14:26:10.733911 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5957545cb-6lrc2" event={"ID":"26539513-f274-471e-ad4a-10bcd4758458","Type":"ContainerStarted","Data":"d606185937500eb2bee6a25d8b0ad1d7609bc85021a0104784b6ed19160a4d25"} Feb 18 14:26:10 crc kubenswrapper[4739]: I0218 14:26:10.734041 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:10 crc kubenswrapper[4739]: I0218 14:26:10.735810 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cfc6d5787-cxgnr" event={"ID":"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040","Type":"ContainerStarted","Data":"c4f01b6bc2775d69d72467f0eedb94b8633a59f1fc4446808c2a8fa25cb2fb08"} Feb 18 14:26:10 crc kubenswrapper[4739]: I0218 14:26:10.740372 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" event={"ID":"ecd1f6fa-009d-4942-98ad-203c31a7bf5b","Type":"ContainerStarted","Data":"5e11b45b876f04f731c74c59c5e7f0a906e1af8f45937ab9a072f46072bf3bb1"} Feb 18 14:26:10 crc kubenswrapper[4739]: I0218 14:26:10.768892 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5957545cb-6lrc2" podStartSLOduration=2.768869988 podStartE2EDuration="2.768869988s" podCreationTimestamp="2026-02-18 14:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:26:10.757916113 +0000 UTC m=+1603.253637045" watchObservedRunningTime="2026-02-18 14:26:10.768869988 +0000 UTC m=+1603.264590920" Feb 18 14:26:11 crc kubenswrapper[4739]: I0218 14:26:11.768879 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cfc6d5787-cxgnr" event={"ID":"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040","Type":"ContainerStarted","Data":"6ec1ba72394816ead088c0f4d2300b7976df7abc9aaed5c94058025f5a5abb8f"} Feb 18 14:26:11 crc kubenswrapper[4739]: I0218 14:26:11.769485 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:11 crc kubenswrapper[4739]: I0218 14:26:11.776195 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" event={"ID":"ecd1f6fa-009d-4942-98ad-203c31a7bf5b","Type":"ContainerStarted","Data":"0d27966de4de938ab655c7b7bd9b35921570d1b746f3453221cfdd6cdaaea4ce"} Feb 18 14:26:11 crc kubenswrapper[4739]: I0218 14:26:11.791768 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5cfc6d5787-cxgnr" podStartSLOduration=2.205920109 podStartE2EDuration="3.791742813s" podCreationTimestamp="2026-02-18 14:26:08 +0000 UTC" firstStartedPulling="2026-02-18 14:26:09.731074407 +0000 UTC m=+1602.226795329" lastFinishedPulling="2026-02-18 14:26:11.316897111 +0000 UTC m=+1603.812618033" observedRunningTime="2026-02-18 14:26:11.78489464 +0000 UTC m=+1604.280615572" watchObservedRunningTime="2026-02-18 14:26:11.791742813 +0000 UTC m=+1604.287463735" Feb 18 14:26:11 crc kubenswrapper[4739]: I0218 14:26:11.817947 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" podStartSLOduration=2.391550832 podStartE2EDuration="3.817926342s" podCreationTimestamp="2026-02-18 14:26:08 +0000 UTC" firstStartedPulling="2026-02-18 14:26:09.890569732 +0000 UTC m=+1602.386290654" 
lastFinishedPulling="2026-02-18 14:26:11.316945242 +0000 UTC m=+1603.812666164" observedRunningTime="2026-02-18 14:26:11.812081465 +0000 UTC m=+1604.307802417" watchObservedRunningTime="2026-02-18 14:26:11.817926342 +0000 UTC m=+1604.313647264" Feb 18 14:26:12 crc kubenswrapper[4739]: I0218 14:26:12.785158 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:13 crc kubenswrapper[4739]: I0218 14:26:13.589647 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:13 crc kubenswrapper[4739]: I0218 14:26:13.659984 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"] Feb 18 14:26:13 crc kubenswrapper[4739]: I0218 14:26:13.660262 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="dnsmasq-dns" containerID="cri-o://44feab0d878b49b40c9f78094ee6d7d5fb8f3aacd6959e36e5fce0d47077102d" gracePeriod=10 Feb 18 14:26:13 crc kubenswrapper[4739]: I0218 14:26:13.807951 4739 generic.go:334] "Generic (PLEG): container finished" podID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerID="44feab0d878b49b40c9f78094ee6d7d5fb8f3aacd6959e36e5fce0d47077102d" exitCode=0 Feb 18 14:26:13 crc kubenswrapper[4739]: I0218 14:26:13.808151 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" event={"ID":"b44bafed-1808-41fc-b2bb-fcd2f1f02a17","Type":"ContainerDied","Data":"44feab0d878b49b40c9f78094ee6d7d5fb8f3aacd6959e36e5fce0d47077102d"} Feb 18 14:26:14 crc kubenswrapper[4739]: E0218 14:26:14.037934 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.219919 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.314701 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315168 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315433 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315544 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " 
Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315569 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315617 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgn4q\" (UniqueName: \"kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315659 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.329067 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q" (OuterVolumeSpecName: "kube-api-access-zgn4q") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "kube-api-access-zgn4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.389185 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.391545 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.391595 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config" (OuterVolumeSpecName: "config") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.391671 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.408391 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.410545 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.410644 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:26:14 crc kubenswrapper[4739]: E0218 14:26:14.410905 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.418918 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.418960 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.418998 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgn4q\" (UniqueName: \"kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q\") on node \"crc\" DevicePath \"\"" 
Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.419012 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.419022 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.419032 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.419040 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.822437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" event={"ID":"b44bafed-1808-41fc-b2bb-fcd2f1f02a17","Type":"ContainerDied","Data":"330f6a433a08cd27da7dffd6b8364dcfdd6172336292c14a7b18e098f0eac2e6"} Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.822516 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml"
Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.822518 4739 scope.go:117] "RemoveContainer" containerID="44feab0d878b49b40c9f78094ee6d7d5fb8f3aacd6959e36e5fce0d47077102d"
Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.851188 4739 scope.go:117] "RemoveContainer" containerID="540b32810564c1395af833055ad23799a4b1a66b7693eafbe7c3cebb7f686098"
Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.856739 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"]
Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.870602 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"]
Feb 18 14:26:15 crc kubenswrapper[4739]: E0218 14:26:15.082844 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache]"
Feb 18 14:26:15 crc kubenswrapper[4739]: I0218 14:26:15.427871 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 18 14:26:16 crc kubenswrapper[4739]: I0218 14:26:16.426793 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" path="/var/lib/kubelet/pods/b44bafed-1808-41fc-b2bb-fcd2f1f02a17/volumes"
Feb 18 14:26:16 crc kubenswrapper[4739]: I0218 14:26:16.855137 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"17c3780ab8ac0d7b8c9a7b14ec263189c1e018fcb68ef427cecb539c67cd078b"}
Feb 18 14:26:16 crc kubenswrapper[4739]: I0218 14:26:16.904505 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.667097418 podStartE2EDuration="44.904479658s" podCreationTimestamp="2026-02-18 14:25:32 +0000 UTC" firstStartedPulling="2026-02-18 14:25:33.373205771 +0000 UTC m=+1565.868926693" lastFinishedPulling="2026-02-18 14:26:15.610588011 +0000 UTC m=+1608.106308933" observedRunningTime="2026-02-18 14:26:16.878716399 +0000 UTC m=+1609.374437341" watchObservedRunningTime="2026-02-18 14:26:16.904479658 +0000 UTC m=+1609.400200580"
Feb 18 14:26:19 crc kubenswrapper[4739]: I0218 14:26:19.088216 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5957545cb-6lrc2"
Feb 18 14:26:19 crc kubenswrapper[4739]: I0218 14:26:19.162381 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"]
Feb 18 14:26:19 crc kubenswrapper[4739]: I0218 14:26:19.163759 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-cf66499c9-k855m" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" containerID="cri-o://783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" gracePeriod=60
Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.602823 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5cfc6d5787-cxgnr"
Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.677060 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"]
Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.677281 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-59f4cc7b48-2kzkr" podUID="40d4949b-6d9f-425e-b02f-d8caa727ed99" containerName="heat-api" containerID="cri-o://12eea8fb9fe4ae7ff2a3c678dc4bd3905eb6fb61a72f8c583710252b1c05d211" gracePeriod=60
Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.778255 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-8dd984b75-2cjs7"
Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.857142 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"]
Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.857326 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" podUID="418a2d42-e21e-4d0d-b295-3178e079431c" containerName="heat-cfnapi" containerID="cri-o://35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467" gracePeriod=60
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.124939 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8nfdw"]
Feb 18 14:26:21 crc kubenswrapper[4739]: E0218 14:26:21.125472 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="dnsmasq-dns"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.125488 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="dnsmasq-dns"
Feb 18 14:26:21 crc kubenswrapper[4739]: E0218 14:26:21.125522 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="init"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.125529 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="init"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.125757 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="dnsmasq-dns"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.127537 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.199055 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6j2h\" (UniqueName: \"kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.199312 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.199404 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.216613 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8nfdw"]
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.302013 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.302090 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.302218 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6j2h\" (UniqueName: \"kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.302728 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.302939 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.362736 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6j2h\" (UniqueName: \"kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.446336 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8nfdw"
Feb 18 14:26:21 crc kubenswrapper[4739]: W0218 14:26:21.955654 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96072604_db66_4bc5_98a7_c62c2d76eb40.slice/crio-1a594f45e975965087f3745b7e4424d1fb7c25896b803da09771f967762a7a70 WatchSource:0}: Error finding container 1a594f45e975965087f3745b7e4424d1fb7c25896b803da09771f967762a7a70: Status 404 returned error can't find the container with id 1a594f45e975965087f3745b7e4424d1fb7c25896b803da09771f967762a7a70
Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.958932 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8nfdw"]
Feb 18 14:26:22 crc kubenswrapper[4739]: I0218 14:26:22.925203 4739 generic.go:334] "Generic (PLEG): container finished" podID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerID="759a170bc779a35f3b7259369c90f0aabe4f5a98e1cd13a17bb561eef1c0e510" exitCode=0
Feb 18 14:26:22 crc kubenswrapper[4739]: I0218 14:26:22.925253 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerDied","Data":"759a170bc779a35f3b7259369c90f0aabe4f5a98e1cd13a17bb561eef1c0e510"}
Feb 18 14:26:22 crc kubenswrapper[4739]: I0218 14:26:22.925283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerStarted","Data":"1a594f45e975965087f3745b7e4424d1fb7c25896b803da09771f967762a7a70"}
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.064664 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-xg8g2"]
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.075847 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-xg8g2"]
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.192944 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-k8bxr"]
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.195044 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.197664 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.206174 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-k8bxr"]
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.355356 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.355721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.355758 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pw6\" (UniqueName: \"kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.356243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.458717 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.458791 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pw6\" (UniqueName: \"kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.458979 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.459132 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.465265 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.465987 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.466677 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.478248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pw6\" (UniqueName: \"kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.515857 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-k8bxr"
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.942773 4739 generic.go:334] "Generic (PLEG): container finished" podID="40d4949b-6d9f-425e-b02f-d8caa727ed99" containerID="12eea8fb9fe4ae7ff2a3c678dc4bd3905eb6fb61a72f8c583710252b1c05d211" exitCode=0
Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.943034 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59f4cc7b48-2kzkr" event={"ID":"40d4949b-6d9f-425e-b02f-d8caa727ed99","Type":"ContainerDied","Data":"12eea8fb9fe4ae7ff2a3c678dc4bd3905eb6fb61a72f8c583710252b1c05d211"}
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.068807 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-k8bxr"]
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.340348 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59f4cc7b48-2kzkr"
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408524 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408650 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408773 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4br8z\" (UniqueName: \"kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408910 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408953 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.433724 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.449574 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1543620e-d684-4634-ba89-662f02f2b0e4" path="/var/lib/kubelet/pods/1543620e-d684-4634-ba89-662f02f2b0e4/volumes"
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.466255 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z" (OuterVolumeSpecName: "kube-api-access-4br8z") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "kube-api-access-4br8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.519230 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.519582 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4br8z\" (UniqueName: \"kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.535659 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:24 crc kubenswrapper[4739]: E0218 14:26:24.558867 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418a2d42_e21e_4d0d_b295_3178e079431c.slice/crio-35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418a2d42_e21e_4d0d_b295_3178e079431c.slice/crio-conmon-35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467.scope\": RecentStats: unable to find data in memory cache]"
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.599823 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.652098 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.657338 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data" (OuterVolumeSpecName: "config-data") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.659512 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.659537 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.659547 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.762072 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.861604 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm"
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.961962 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k8bxr" event={"ID":"18e3b1f2-e16d-4800-90db-c4cc03f891c3","Type":"ContainerStarted","Data":"a9b6431a1e4c3fdb163f771f15f65db97a8f232887dad7bee508d0c10d0724b9"}
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.964248 4739 generic.go:334] "Generic (PLEG): container finished" podID="418a2d42-e21e-4d0d-b295-3178e079431c" containerID="35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467" exitCode=0
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965221 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965404 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hc2h\" (UniqueName: \"kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965530 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965575 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965598 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965641 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") "
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.967269 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm"
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.968358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" event={"ID":"418a2d42-e21e-4d0d-b295-3178e079431c","Type":"ContainerDied","Data":"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467"}
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.968437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" event={"ID":"418a2d42-e21e-4d0d-b295-3178e079431c","Type":"ContainerDied","Data":"a742c3494bc51e899a5c01b6b095653da1f5cc7a599a99cd559cc59388b29eb4"}
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.968475 4739 scope.go:117] "RemoveContainer" containerID="35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467"
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.977178 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59f4cc7b48-2kzkr" event={"ID":"40d4949b-6d9f-425e-b02f-d8caa727ed99","Type":"ContainerDied","Data":"182afb94ab91cf9899a4110a4be4e76e5c04c7d5630670036fcfd2f21cbc8a5f"}
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.977297 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59f4cc7b48-2kzkr"
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.978499 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h" (OuterVolumeSpecName: "kube-api-access-7hc2h") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "kube-api-access-7hc2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.981302 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.031755 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"]
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.049357 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"]
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.056434 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.068115 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data" (OuterVolumeSpecName: "config-data") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.068608 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") "
Feb 18 14:26:25 crc kubenswrapper[4739]: W0218 14:26:25.068761 4739 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/418a2d42-e21e-4d0d-b295-3178e079431c/volumes/kubernetes.io~secret/config-data
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.068771 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data" (OuterVolumeSpecName: "config-data") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.070909 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.070934 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.070944 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.070954 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hc2h\" (UniqueName: \"kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.075615 4739 scope.go:117] "RemoveContainer" containerID="35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467"
Feb 18 14:26:25 crc kubenswrapper[4739]: E0218 14:26:25.076263 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467\": container with ID starting with 35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467 not found: ID does not exist" containerID="35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467"
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.076380 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467"} err="failed to get container status \"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467\": rpc error: code = NotFound desc = could not find container \"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467\": container with ID starting with 35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467 not found: ID does not exist"
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.076408 4739 scope.go:117] "RemoveContainer" containerID="12eea8fb9fe4ae7ff2a3c678dc4bd3905eb6fb61a72f8c583710252b1c05d211"
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.080919 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.093560 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.173057 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.173128 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.321168 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"]
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.332060 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"]
Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.999073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerStarted","Data":"0ac768b310244a0425581589d8a72607c1c9ad5cfef99e8994bfd0a2fa8cd429"}
Feb 18 14:26:26 crc kubenswrapper[4739]: I0218 14:26:26.428949 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40d4949b-6d9f-425e-b02f-d8caa727ed99" path="/var/lib/kubelet/pods/40d4949b-6d9f-425e-b02f-d8caa727ed99/volumes"
Feb 18 14:26:26 crc kubenswrapper[4739]: I0218 14:26:26.429692 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="418a2d42-e21e-4d0d-b295-3178e079431c" path="/var/lib/kubelet/pods/418a2d42-e21e-4d0d-b295-3178e079431c/volumes"
Feb 18 14:26:28 crc kubenswrapper[4739]: E0218 14:26:28.017420 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit
code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:28 crc kubenswrapper[4739]: E0218 14:26:28.020234 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:28 crc kubenswrapper[4739]: E0218 14:26:28.023417 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:28 crc kubenswrapper[4739]: E0218 14:26:28.023686 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-cf66499c9-k855m" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:28 crc kubenswrapper[4739]: I0218 14:26:28.027627 4739 generic.go:334] "Generic (PLEG): container finished" podID="c71b6fb5-d59d-479d-b3fc-996d14bd93ed" containerID="9c40a962e22b100be23a7a0163ebcb66d15c4bd51bb227f4c767cbf6c58812d0" exitCode=0 Feb 18 14:26:28 crc kubenswrapper[4739]: I0218 14:26:28.027720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c71b6fb5-d59d-479d-b3fc-996d14bd93ed","Type":"ContainerDied","Data":"9c40a962e22b100be23a7a0163ebcb66d15c4bd51bb227f4c767cbf6c58812d0"} Feb 18 14:26:29 crc kubenswrapper[4739]: I0218 14:26:29.040994 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="83da58fc-6d28-4a56-abc1-00267082c6b6" containerID="109a1d01b2b388822b4017533289f525bb0875693261feeb825b93643fe2bf46" exitCode=0 Feb 18 14:26:29 crc kubenswrapper[4739]: I0218 14:26:29.041078 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"83da58fc-6d28-4a56-abc1-00267082c6b6","Type":"ContainerDied","Data":"109a1d01b2b388822b4017533289f525bb0875693261feeb825b93643fe2bf46"} Feb 18 14:26:29 crc kubenswrapper[4739]: I0218 14:26:29.045373 4739 generic.go:334] "Generic (PLEG): container finished" podID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerID="0ac768b310244a0425581589d8a72607c1c9ad5cfef99e8994bfd0a2fa8cd429" exitCode=0 Feb 18 14:26:29 crc kubenswrapper[4739]: I0218 14:26:29.045415 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerDied","Data":"0ac768b310244a0425581589d8a72607c1c9ad5cfef99e8994bfd0a2fa8cd429"} Feb 18 14:26:29 crc kubenswrapper[4739]: I0218 14:26:29.411595 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:26:29 crc kubenswrapper[4739]: E0218 14:26:29.411919 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:26:30 crc kubenswrapper[4739]: I0218 14:26:30.957936 4739 scope.go:117] "RemoveContainer" containerID="4041330ab9876dd3ccc3269fd63191d50dd8718454d5e9168b48f08746b23647" Feb 18 14:26:31 crc kubenswrapper[4739]: I0218 14:26:31.067028 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-2" event={"ID":"83da58fc-6d28-4a56-abc1-00267082c6b6","Type":"ContainerStarted","Data":"c4b04fa02b67b0be2421cd52f673e42500986256f3427c38976b1dc14f3dd2b4"} Feb 18 14:26:31 crc kubenswrapper[4739]: I0218 14:26:31.779711 4739 scope.go:117] "RemoveContainer" containerID="405502ac3609c5b3fd9875f3041040fcb2500cda1197ef6aa5109c839a432fea" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.084657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c71b6fb5-d59d-479d-b3fc-996d14bd93ed","Type":"ContainerStarted","Data":"94fb4b4e0ed1e4354cf0fd45d810ad5a001321ba13ecffee37c3fca4d8107def"} Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.084729 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.085174 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.115250 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.11522351 podStartE2EDuration="47.11522351s" podCreationTimestamp="2026-02-18 14:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:26:32.105369963 +0000 UTC m=+1624.601090895" watchObservedRunningTime="2026-02-18 14:26:32.11522351 +0000 UTC m=+1624.610944442" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.134484 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=40.134465462 podStartE2EDuration="40.134465462s" podCreationTimestamp="2026-02-18 14:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 14:26:32.126345068 +0000 UTC m=+1624.622065990" watchObservedRunningTime="2026-02-18 14:26:32.134465462 +0000 UTC m=+1624.630186404" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.534283 4739 scope.go:117] "RemoveContainer" containerID="17b7a228a9fbcf851aed446c2de3568b52fb77affe9764c39277650c860631aa" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.555063 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.612709 4739 scope.go:117] "RemoveContainer" containerID="7c4bb8b1c5394b1feff00226f10597657ca326d8c75003b9dcfbb17edea1d2b3" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.106424 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k8bxr" event={"ID":"18e3b1f2-e16d-4800-90db-c4cc03f891c3","Type":"ContainerStarted","Data":"ea37bd2fe6c3cde4519476c0d93705aa44f3d3921ef14e7b974cb0ef1c293843"} Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.130938 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5"] Feb 18 14:26:33 crc kubenswrapper[4739]: E0218 14:26:33.131563 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d4949b-6d9f-425e-b02f-d8caa727ed99" containerName="heat-api" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.131578 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d4949b-6d9f-425e-b02f-d8caa727ed99" containerName="heat-api" Feb 18 14:26:33 crc kubenswrapper[4739]: E0218 14:26:33.131597 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418a2d42-e21e-4d0d-b295-3178e079431c" containerName="heat-cfnapi" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.131606 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="418a2d42-e21e-4d0d-b295-3178e079431c" containerName="heat-cfnapi" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 
14:26:33.131828 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="418a2d42-e21e-4d0d-b295-3178e079431c" containerName="heat-cfnapi" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.131855 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="40d4949b-6d9f-425e-b02f-d8caa727ed99" containerName="heat-api" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.132678 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.134897 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.135101 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.135134 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.135470 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-k8bxr" podStartSLOduration=1.672601271 podStartE2EDuration="10.135453215s" podCreationTimestamp="2026-02-18 14:26:23 +0000 UTC" firstStartedPulling="2026-02-18 14:26:24.083029393 +0000 UTC m=+1616.578750315" lastFinishedPulling="2026-02-18 14:26:32.545881337 +0000 UTC m=+1625.041602259" observedRunningTime="2026-02-18 14:26:33.126608933 +0000 UTC m=+1625.622329865" watchObservedRunningTime="2026-02-18 14:26:33.135453215 +0000 UTC m=+1625.631174137" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.141492 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.179770 4739 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5"] Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.216902 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.217314 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqkdj\" (UniqueName: \"kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.217478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.217554 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc 
kubenswrapper[4739]: I0218 14:26:33.320318 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqkdj\" (UniqueName: \"kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.320463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.320531 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.320581 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.330558 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.331175 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.334912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.340656 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqkdj\" (UniqueName: \"kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.451701 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:34 crc kubenswrapper[4739]: I0218 14:26:34.137192 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerStarted","Data":"bb9d0dc56ee769336340065dd5699e513fe035812eb92fd0d0e14c8dd10b87f4"} Feb 18 14:26:34 crc kubenswrapper[4739]: I0218 14:26:34.194355 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8nfdw" podStartSLOduration=3.062977521 podStartE2EDuration="13.194333229s" podCreationTimestamp="2026-02-18 14:26:21 +0000 UTC" firstStartedPulling="2026-02-18 14:26:22.926972455 +0000 UTC m=+1615.422693377" lastFinishedPulling="2026-02-18 14:26:33.058328163 +0000 UTC m=+1625.554049085" observedRunningTime="2026-02-18 14:26:34.168618675 +0000 UTC m=+1626.664339607" watchObservedRunningTime="2026-02-18 14:26:34.194333229 +0000 UTC m=+1626.690054151" Feb 18 14:26:34 crc kubenswrapper[4739]: I0218 14:26:34.629798 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5"] Feb 18 14:26:35 crc kubenswrapper[4739]: I0218 14:26:35.166292 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" event={"ID":"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb","Type":"ContainerStarted","Data":"b8cb9ed99d22914d1a0e1925f4de01b4e33640477ab310a82f26be58456df960"} Feb 18 14:26:38 crc kubenswrapper[4739]: E0218 14:26:38.017713 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:38 crc 
kubenswrapper[4739]: E0218 14:26:38.023096 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:38 crc kubenswrapper[4739]: E0218 14:26:38.024752 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:38 crc kubenswrapper[4739]: E0218 14:26:38.024839 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-cf66499c9-k855m" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:41 crc kubenswrapper[4739]: I0218 14:26:41.447325 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:41 crc kubenswrapper[4739]: I0218 14:26:41.447899 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:41 crc kubenswrapper[4739]: I0218 14:26:41.514015 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:42 crc kubenswrapper[4739]: I0218 14:26:42.308866 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:42 crc kubenswrapper[4739]: I0218 14:26:42.364863 4739 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-8nfdw"] Feb 18 14:26:43 crc kubenswrapper[4739]: I0218 14:26:43.104776 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="83da58fc-6d28-4a56-abc1-00267082c6b6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.16:5671: connect: connection refused" Feb 18 14:26:43 crc kubenswrapper[4739]: I0218 14:26:43.410551 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:26:43 crc kubenswrapper[4739]: E0218 14:26:43.410854 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:26:44 crc kubenswrapper[4739]: I0218 14:26:44.275654 4739 generic.go:334] "Generic (PLEG): container finished" podID="9b3545e1-27f7-421f-9471-809d6b04706d" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" exitCode=0 Feb 18 14:26:44 crc kubenswrapper[4739]: I0218 14:26:44.276198 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8nfdw" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="registry-server" containerID="cri-o://bb9d0dc56ee769336340065dd5699e513fe035812eb92fd0d0e14c8dd10b87f4" gracePeriod=2 Feb 18 14:26:44 crc kubenswrapper[4739]: I0218 14:26:44.275869 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cf66499c9-k855m" event={"ID":"9b3545e1-27f7-421f-9471-809d6b04706d","Type":"ContainerDied","Data":"783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b"} Feb 18 
14:26:45 crc kubenswrapper[4739]: I0218 14:26:45.292199 4739 generic.go:334] "Generic (PLEG): container finished" podID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerID="bb9d0dc56ee769336340065dd5699e513fe035812eb92fd0d0e14c8dd10b87f4" exitCode=0 Feb 18 14:26:45 crc kubenswrapper[4739]: I0218 14:26:45.292248 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerDied","Data":"bb9d0dc56ee769336340065dd5699e513fe035812eb92fd0d0e14c8dd10b87f4"} Feb 18 14:26:46 crc kubenswrapper[4739]: I0218 14:26:46.048231 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c71b6fb5-d59d-479d-b3fc-996d14bd93ed" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.15:5671: connect: connection refused" Feb 18 14:26:46 crc kubenswrapper[4739]: I0218 14:26:46.305877 4739 generic.go:334] "Generic (PLEG): container finished" podID="18e3b1f2-e16d-4800-90db-c4cc03f891c3" containerID="ea37bd2fe6c3cde4519476c0d93705aa44f3d3921ef14e7b974cb0ef1c293843" exitCode=0 Feb 18 14:26:46 crc kubenswrapper[4739]: I0218 14:26:46.305922 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k8bxr" event={"ID":"18e3b1f2-e16d-4800-90db-c4cc03f891c3","Type":"ContainerDied","Data":"ea37bd2fe6c3cde4519476c0d93705aa44f3d3921ef14e7b974cb0ef1c293843"} Feb 18 14:26:48 crc kubenswrapper[4739]: E0218 14:26:48.015765 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b is running failed: container process not found" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:48 crc kubenswrapper[4739]: E0218 14:26:48.017090 4739 log.go:32] "ExecSync cmd 
from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b is running failed: container process not found" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:48 crc kubenswrapper[4739]: E0218 14:26:48.017723 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b is running failed: container process not found" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:48 crc kubenswrapper[4739]: E0218 14:26:48.017758 4739 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-cf66499c9-k855m" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.310076 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.411781 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k8bxr" event={"ID":"18e3b1f2-e16d-4800-90db-c4cc03f891c3","Type":"ContainerDied","Data":"a9b6431a1e4c3fdb163f771f15f65db97a8f232887dad7bee508d0c10d0724b9"} Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.412033 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b6431a1e4c3fdb163f771f15f65db97a8f232887dad7bee508d0c10d0724b9" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.412084 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.480949 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data\") pod \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.481064 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4pw6\" (UniqueName: \"kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6\") pod \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.481267 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts\") pod \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.481342 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle\") pod \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.500293 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6" (OuterVolumeSpecName: "kube-api-access-h4pw6") pod "18e3b1f2-e16d-4800-90db-c4cc03f891c3" (UID: "18e3b1f2-e16d-4800-90db-c4cc03f891c3"). InnerVolumeSpecName "kube-api-access-h4pw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.522089 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts" (OuterVolumeSpecName: "scripts") pod "18e3b1f2-e16d-4800-90db-c4cc03f891c3" (UID: "18e3b1f2-e16d-4800-90db-c4cc03f891c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.528959 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data" (OuterVolumeSpecName: "config-data") pod "18e3b1f2-e16d-4800-90db-c4cc03f891c3" (UID: "18e3b1f2-e16d-4800-90db-c4cc03f891c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.560414 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18e3b1f2-e16d-4800-90db-c4cc03f891c3" (UID: "18e3b1f2-e16d-4800-90db-c4cc03f891c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.589399 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.589426 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.589437 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.593526 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4pw6\" (UniqueName: \"kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.046779 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.060654 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.214298 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content\") pod \"96072604-db66-4bc5-98a7-c62c2d76eb40\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.214930 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities\") pod \"96072604-db66-4bc5-98a7-c62c2d76eb40\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.215007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom\") pod \"9b3545e1-27f7-421f-9471-809d6b04706d\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.215062 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data\") pod \"9b3545e1-27f7-421f-9471-809d6b04706d\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.215132 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njr9t\" (UniqueName: \"kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t\") pod \"9b3545e1-27f7-421f-9471-809d6b04706d\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.215217 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-v6j2h\" (UniqueName: \"kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h\") pod \"96072604-db66-4bc5-98a7-c62c2d76eb40\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.215250 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle\") pod \"9b3545e1-27f7-421f-9471-809d6b04706d\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.216887 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities" (OuterVolumeSpecName: "utilities") pod "96072604-db66-4bc5-98a7-c62c2d76eb40" (UID: "96072604-db66-4bc5-98a7-c62c2d76eb40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.220092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b3545e1-27f7-421f-9471-809d6b04706d" (UID: "9b3545e1-27f7-421f-9471-809d6b04706d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.222131 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t" (OuterVolumeSpecName: "kube-api-access-njr9t") pod "9b3545e1-27f7-421f-9471-809d6b04706d" (UID: "9b3545e1-27f7-421f-9471-809d6b04706d"). InnerVolumeSpecName "kube-api-access-njr9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.223325 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h" (OuterVolumeSpecName: "kube-api-access-v6j2h") pod "96072604-db66-4bc5-98a7-c62c2d76eb40" (UID: "96072604-db66-4bc5-98a7-c62c2d76eb40"). InnerVolumeSpecName "kube-api-access-v6j2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.254705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b3545e1-27f7-421f-9471-809d6b04706d" (UID: "9b3545e1-27f7-421f-9471-809d6b04706d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.258247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96072604-db66-4bc5-98a7-c62c2d76eb40" (UID: "96072604-db66-4bc5-98a7-c62c2d76eb40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.296930 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data" (OuterVolumeSpecName: "config-data") pod "9b3545e1-27f7-421f-9471-809d6b04706d" (UID: "9b3545e1-27f7-421f-9471-809d6b04706d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317165 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317197 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317211 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317220 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njr9t\" (UniqueName: \"kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317230 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6j2h\" (UniqueName: \"kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317237 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317245 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 
14:26:50.440707 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.441314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerDied","Data":"1a594f45e975965087f3745b7e4424d1fb7c25896b803da09771f967762a7a70"} Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.441432 4739 scope.go:117] "RemoveContainer" containerID="bb9d0dc56ee769336340065dd5699e513fe035812eb92fd0d0e14c8dd10b87f4" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.444491 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cf66499c9-k855m" event={"ID":"9b3545e1-27f7-421f-9471-809d6b04706d","Type":"ContainerDied","Data":"34402e3be46581b4f11650c5f4f2ec4f1afe7d82b3230635fe9430959d1f9c69"} Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.444568 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.486639 4739 scope.go:117] "RemoveContainer" containerID="0ac768b310244a0425581589d8a72607c1c9ad5cfef99e8994bfd0a2fa8cd429" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.491338 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8nfdw"] Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.508583 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8nfdw"] Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.522591 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.526356 4739 scope.go:117] "RemoveContainer" containerID="759a170bc779a35f3b7259369c90f0aabe4f5a98e1cd13a17bb561eef1c0e510" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.546329 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.590466 4739 scope.go:117] "RemoveContainer" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" Feb 18 14:26:51 crc kubenswrapper[4739]: I0218 14:26:51.456256 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" event={"ID":"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb","Type":"ContainerStarted","Data":"39ef6715d910bad18771b0adccde4ffddd06d4f64ddaf3ce90256b5a58ff4742"} Feb 18 14:26:51 crc kubenswrapper[4739]: I0218 14:26:51.477652 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" podStartSLOduration=3.050248937 podStartE2EDuration="18.477633184s" podCreationTimestamp="2026-02-18 14:26:33 +0000 UTC" firstStartedPulling="2026-02-18 14:26:34.647036219 +0000 
UTC m=+1627.142757141" lastFinishedPulling="2026-02-18 14:26:50.074420466 +0000 UTC m=+1642.570141388" observedRunningTime="2026-02-18 14:26:51.475410518 +0000 UTC m=+1643.971131450" watchObservedRunningTime="2026-02-18 14:26:51.477633184 +0000 UTC m=+1643.973354106" Feb 18 14:26:52 crc kubenswrapper[4739]: I0218 14:26:52.426094 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" path="/var/lib/kubelet/pods/96072604-db66-4bc5-98a7-c62c2d76eb40/volumes" Feb 18 14:26:52 crc kubenswrapper[4739]: I0218 14:26:52.427389 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" path="/var/lib/kubelet/pods/9b3545e1-27f7-421f-9471-809d6b04706d/volumes" Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.103646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.186061 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.271696 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.272013 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-api" containerID="cri-o://ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" gracePeriod=30 Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.272319 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-notifier" containerID="cri-o://d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" gracePeriod=30 Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.272522 4739 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-evaluator" containerID="cri-o://0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" gracePeriod=30 Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.272726 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-listener" containerID="cri-o://5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" gracePeriod=30 Feb 18 14:26:54 crc kubenswrapper[4739]: I0218 14:26:54.549809 4739 generic.go:334] "Generic (PLEG): container finished" podID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerID="0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" exitCode=0 Feb 18 14:26:54 crc kubenswrapper[4739]: I0218 14:26:54.550133 4739 generic.go:334] "Generic (PLEG): container finished" podID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerID="ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" exitCode=0 Feb 18 14:26:54 crc kubenswrapper[4739]: I0218 14:26:54.549913 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerDied","Data":"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba"} Feb 18 14:26:54 crc kubenswrapper[4739]: I0218 14:26:54.550178 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerDied","Data":"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7"} Feb 18 14:26:56 crc kubenswrapper[4739]: I0218 14:26:56.041607 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:26:58 crc kubenswrapper[4739]: I0218 14:26:58.344355 4739 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" containerID="cri-o://86dcf3153be4cedc4f3f4f557f9adbf8d2dc9ddb02d52663f80236312bb555f6" gracePeriod=604795 Feb 18 14:26:58 crc kubenswrapper[4739]: I0218 14:26:58.423818 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:26:58 crc kubenswrapper[4739]: E0218 14:26:58.424379 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.235323 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336087 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336348 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336414 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336491 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bhhc\" (UniqueName: \"kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336562 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.350740 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc" (OuterVolumeSpecName: "kube-api-access-6bhhc") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "kube-api-access-6bhhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.357610 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts" (OuterVolumeSpecName: "scripts") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.443584 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bhhc\" (UniqueName: \"kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.443675 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.461598 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.497011 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.546595 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.548632 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.558409 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.560259 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data" (OuterVolumeSpecName: "config-data") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607041 4739 generic.go:334] "Generic (PLEG): container finished" podID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerID="5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" exitCode=0 Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607347 4739 generic.go:334] "Generic (PLEG): container finished" podID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerID="d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" exitCode=0 Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607134 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerDied","Data":"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2"} Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607490 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerDied","Data":"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578"} Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607503 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerDied","Data":"4d6f0aeaea08a012f733e13300610a5640aaa1fafeeed5ec43bbbd5b2b9a8193"} Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607519 4739 scope.go:117] "RemoveContainer" containerID="5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.650777 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.650811 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.703143 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.719193 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.730216 4739 scope.go:117] "RemoveContainer" containerID="d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745095 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745711 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e3b1f2-e16d-4800-90db-c4cc03f891c3" containerName="aodh-db-sync" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745732 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e3b1f2-e16d-4800-90db-c4cc03f891c3" containerName="aodh-db-sync" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745750 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745757 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745771 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-evaluator" Feb 18 14:26:59 crc 
kubenswrapper[4739]: I0218 14:26:59.745780 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-evaluator" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745797 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-listener" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745804 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-listener" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745817 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="extract-utilities" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745825 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="extract-utilities" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745836 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="registry-server" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745843 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="registry-server" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745861 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-notifier" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745868 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-notifier" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745879 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-api" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 
14:26:59.745884 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-api" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745899 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="extract-content" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745905 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="extract-content" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746135 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e3b1f2-e16d-4800-90db-c4cc03f891c3" containerName="aodh-db-sync" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746154 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-api" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746165 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-notifier" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746178 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-evaluator" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746199 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-listener" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746213 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746227 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="registry-server" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.748308 
4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.754245 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.754351 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.754391 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.758815 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.763046 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-747v8" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.766833 4739 scope.go:117] "RemoveContainer" containerID="0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.773729 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.838691 4739 scope.go:117] "RemoveContainer" containerID="ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860025 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-internal-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-public-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860276 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-scripts\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860304 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6q52\" (UniqueName: \"kubernetes.io/projected/44288fd5-6ac4-4d9f-b16e-97ae45b79030-kube-api-access-l6q52\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860340 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-combined-ca-bundle\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-config-data\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.906849 4739 scope.go:117] "RemoveContainer" containerID="5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.908598 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2\": container with ID starting with 5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2 not found: ID does not exist" containerID="5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.908640 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2"} err="failed to get container status \"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2\": rpc error: code = NotFound desc = could not find container \"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2\": container with ID starting with 5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.908671 4739 scope.go:117] "RemoveContainer" containerID="d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.908948 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578\": container with ID starting with d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578 not found: ID does not exist" containerID="d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.908985 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578"} err="failed to get container status \"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578\": rpc error: code = NotFound desc = could not find container \"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578\": container with ID 
starting with d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.909000 4739 scope.go:117] "RemoveContainer" containerID="0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.909643 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba\": container with ID starting with 0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba not found: ID does not exist" containerID="0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.909697 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba"} err="failed to get container status \"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba\": rpc error: code = NotFound desc = could not find container \"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba\": container with ID starting with 0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.909723 4739 scope.go:117] "RemoveContainer" containerID="ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.910602 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7\": container with ID starting with ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7 not found: ID does not exist" containerID="ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" Feb 18 
14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.910642 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7"} err="failed to get container status \"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7\": rpc error: code = NotFound desc = could not find container \"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7\": container with ID starting with ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.910669 4739 scope.go:117] "RemoveContainer" containerID="5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.912608 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2"} err="failed to get container status \"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2\": rpc error: code = NotFound desc = could not find container \"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2\": container with ID starting with 5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.912633 4739 scope.go:117] "RemoveContainer" containerID="d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.913030 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578"} err="failed to get container status \"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578\": rpc error: code = NotFound desc = could not find container 
\"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578\": container with ID starting with d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.913070 4739 scope.go:117] "RemoveContainer" containerID="0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.913339 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba"} err="failed to get container status \"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba\": rpc error: code = NotFound desc = could not find container \"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba\": container with ID starting with 0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.913361 4739 scope.go:117] "RemoveContainer" containerID="ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.913605 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7"} err="failed to get container status \"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7\": rpc error: code = NotFound desc = could not find container \"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7\": container with ID starting with ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.962282 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-scripts\") pod \"aodh-0\" 
(UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.962337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6q52\" (UniqueName: \"kubernetes.io/projected/44288fd5-6ac4-4d9f-b16e-97ae45b79030-kube-api-access-l6q52\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.962389 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-combined-ca-bundle\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.962505 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-config-data\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.962715 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-internal-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.963096 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-public-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.968615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-scripts\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.968972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-internal-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.969368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-public-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.971264 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-config-data\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.992246 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-combined-ca-bundle\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.995153 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6q52\" (UniqueName: \"kubernetes.io/projected/44288fd5-6ac4-4d9f-b16e-97ae45b79030-kube-api-access-l6q52\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:27:00 crc kubenswrapper[4739]: I0218 14:27:00.073173 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:27:00 crc kubenswrapper[4739]: I0218 14:27:00.436394 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" path="/var/lib/kubelet/pods/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e/volumes" Feb 18 14:27:00 crc kubenswrapper[4739]: I0218 14:27:00.702154 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:27:01 crc kubenswrapper[4739]: I0218 14:27:01.632342 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"44288fd5-6ac4-4d9f-b16e-97ae45b79030","Type":"ContainerStarted","Data":"403176e967830085678bd1996fdf275eb9c32a06ad422547dc22e7325fbbc439"} Feb 18 14:27:01 crc kubenswrapper[4739]: I0218 14:27:01.632969 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"44288fd5-6ac4-4d9f-b16e-97ae45b79030","Type":"ContainerStarted","Data":"2cb412dbed0abede87146ec8c9c134ce1a5e53ebf02003796582c9c6e0b8dbe0"} Feb 18 14:27:02 crc kubenswrapper[4739]: I0218 14:27:02.644151 4739 generic.go:334] "Generic (PLEG): container finished" podID="888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" containerID="39ef6715d910bad18771b0adccde4ffddd06d4f64ddaf3ce90256b5a58ff4742" exitCode=0 Feb 18 14:27:02 crc kubenswrapper[4739]: I0218 14:27:02.644238 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" event={"ID":"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb","Type":"ContainerDied","Data":"39ef6715d910bad18771b0adccde4ffddd06d4f64ddaf3ce90256b5a58ff4742"} Feb 18 14:27:03 crc kubenswrapper[4739]: I0218 14:27:03.114810 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 18 14:27:03 crc kubenswrapper[4739]: I0218 14:27:03.657296 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"44288fd5-6ac4-4d9f-b16e-97ae45b79030","Type":"ContainerStarted","Data":"724a4db8cc6bc3613b9d2a784ef25e89105612b429c964224a1f1664c08d1bfd"} Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.670439 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" event={"ID":"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb","Type":"ContainerDied","Data":"b8cb9ed99d22914d1a0e1925f4de01b4e33640477ab310a82f26be58456df960"} Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.670969 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8cb9ed99d22914d1a0e1925f4de01b4e33640477ab310a82f26be58456df960" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.673857 4739 generic.go:334] "Generic (PLEG): container finished" podID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerID="86dcf3153be4cedc4f3f4f557f9adbf8d2dc9ddb02d52663f80236312bb555f6" exitCode=0 Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.673898 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerDied","Data":"86dcf3153be4cedc4f3f4f557f9adbf8d2dc9ddb02d52663f80236312bb555f6"} Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.764034 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.814801 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle\") pod \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.814873 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqkdj\" (UniqueName: \"kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj\") pod \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.814956 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam\") pod \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.815076 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory\") pod \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.823707 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" (UID: "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.824649 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj" (OuterVolumeSpecName: "kube-api-access-fqkdj") pod "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" (UID: "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb"). InnerVolumeSpecName "kube-api-access-fqkdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.863015 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" (UID: "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.868200 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory" (OuterVolumeSpecName: "inventory") pod "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" (UID: "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.918988 4739 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.919038 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqkdj\" (UniqueName: \"kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.919052 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.919066 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.035499 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.687274 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.690514 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.690589 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerDied","Data":"95dc6b6636dbaa09768645df6028b202c5114fe72bc89c98b8330cd58fee1cc8"} Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.690639 4739 scope.go:117] "RemoveContainer" containerID="86dcf3153be4cedc4f3f4f557f9adbf8d2dc9ddb02d52663f80236312bb555f6" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.856429 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc"] Feb 18 14:27:05 crc kubenswrapper[4739]: E0218 14:27:05.857091 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.857114 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:05 crc kubenswrapper[4739]: E0218 14:27:05.857144 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="setup-container" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.857152 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="setup-container" Feb 18 14:27:05 crc kubenswrapper[4739]: E0218 14:27:05.857177 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.857185 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" Feb 18 14:27:05 crc kubenswrapper[4739]: 
I0218 14:27:05.857483 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.857519 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.858524 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.862245 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.862749 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.862985 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.864201 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.868776 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc"] Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.994985 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995103 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-h92gx\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995191 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995259 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995354 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995388 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995411 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: 
\"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995472 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995501 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995529 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995626 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.996009 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.996142 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r4rx\" (UniqueName: \"kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.996188 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.997959 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:05.999595 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:05.999962 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.003109 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info" (OuterVolumeSpecName: "pod-info") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.004616 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx" (OuterVolumeSpecName: "kube-api-access-h92gx") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "kube-api-access-h92gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.008266 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.012699 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.015460 4739 scope.go:117] "RemoveContainer" containerID="a1e18a076520af601e6507f431aa025a06385212521ec627530586a088f11655" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.059340 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data" (OuterVolumeSpecName: "config-data") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099367 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099436 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r4rx\" (UniqueName: \"kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099484 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099612 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099623 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099633 4739 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099641 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099648 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099656 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92gx\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099664 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099672 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099879 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf" (OuterVolumeSpecName: "server-conf") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.103737 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.106485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.121229 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r4rx\" (UniqueName: \"kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.123493 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952" (OuterVolumeSpecName: "persistence") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "pvc-23b37086-b6fd-42dd-960e-d907e6689952". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.183368 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.205233 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") on node \"crc\" " Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.205269 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.260600 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.260765 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-23b37086-b6fd-42dd-960e-d907e6689952" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952") on node "crc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.290349 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.307066 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.307104 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.380499 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.398888 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.445057 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" path="/var/lib/kubelet/pods/a5594aaa-fab3-4dad-b79e-17200bc2f1ee/volumes" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.445984 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.450404 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.450511 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524413 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524509 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de0100ca-60e4-40d3-afeb-f5da9513fdc1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524604 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rdl2\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-kube-api-access-6rdl2\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524649 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " 
pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524675 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524703 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524732 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de0100ca-60e4-40d3-afeb-f5da9513fdc1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524830 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524888 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " 
pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524904 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-config-data\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627394 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627447 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rdl2\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-kube-api-access-6rdl2\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627500 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627529 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627561 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627590 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de0100ca-60e4-40d3-afeb-f5da9513fdc1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627772 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-config-data\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627864 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de0100ca-60e4-40d3-afeb-f5da9513fdc1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.629682 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.630158 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.630333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.630520 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.630553 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1542ad1e95f6d05e9b33a4f8791d4ee2fe2b5bce9c9209ea9b163f0535bf4310/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.632583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.632691 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-config-data\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.633948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de0100ca-60e4-40d3-afeb-f5da9513fdc1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.634110 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de0100ca-60e4-40d3-afeb-f5da9513fdc1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " 
pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.636399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.643921 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.650215 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdl2\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-kube-api-access-6rdl2\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.742101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"44288fd5-6ac4-4d9f-b16e-97ae45b79030","Type":"ContainerStarted","Data":"50a4a18ff9bb9857e30195f19a2bdc7011b61567cda085dd45f53667910cdcdf"} Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.744071 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.781087 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.846486 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc"] Feb 18 14:27:06 crc kubenswrapper[4739]: W0218 14:27:06.853708 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba2cd97a_cec6_45bc_a08c_b179dc0f72d6.slice/crio-8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7 WatchSource:0}: Error finding container 8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7: Status 404 returned error can't find the container with id 8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7 Feb 18 14:27:07 crc kubenswrapper[4739]: I0218 14:27:07.416864 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:27:07 crc kubenswrapper[4739]: W0218 14:27:07.701979 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde0100ca_60e4_40d3_afeb_f5da9513fdc1.slice/crio-d01e8b6a388c8c4f0c5ba2b09aafc27911de4aef1e97385fa166a19662e34911 WatchSource:0}: Error finding container d01e8b6a388c8c4f0c5ba2b09aafc27911de4aef1e97385fa166a19662e34911: Status 404 returned error can't find the container with id d01e8b6a388c8c4f0c5ba2b09aafc27911de4aef1e97385fa166a19662e34911 Feb 18 14:27:07 crc kubenswrapper[4739]: I0218 14:27:07.762928 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" event={"ID":"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6","Type":"ContainerStarted","Data":"8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7"} Feb 18 14:27:07 crc kubenswrapper[4739]: I0218 14:27:07.764435 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"de0100ca-60e4-40d3-afeb-f5da9513fdc1","Type":"ContainerStarted","Data":"d01e8b6a388c8c4f0c5ba2b09aafc27911de4aef1e97385fa166a19662e34911"} Feb 18 14:27:08 crc kubenswrapper[4739]: I0218 14:27:08.776729 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"44288fd5-6ac4-4d9f-b16e-97ae45b79030","Type":"ContainerStarted","Data":"e9e866df911b5c966b3e84d602e9af4840ac454bf3dae29b5821cd170b689e34"} Feb 18 14:27:08 crc kubenswrapper[4739]: I0218 14:27:08.778405 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" event={"ID":"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6","Type":"ContainerStarted","Data":"165b501719bb5519d62c583995defec1cc41f398b6ba2378ea6ec76af3514685"} Feb 18 14:27:08 crc kubenswrapper[4739]: I0218 14:27:08.827892 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.748463634 podStartE2EDuration="9.827869245s" podCreationTimestamp="2026-02-18 14:26:59 +0000 UTC" firstStartedPulling="2026-02-18 14:27:00.709350717 +0000 UTC m=+1653.205071639" lastFinishedPulling="2026-02-18 14:27:07.788756328 +0000 UTC m=+1660.284477250" observedRunningTime="2026-02-18 14:27:08.798574562 +0000 UTC m=+1661.294295494" watchObservedRunningTime="2026-02-18 14:27:08.827869245 +0000 UTC m=+1661.323590167" Feb 18 14:27:08 crc kubenswrapper[4739]: I0218 14:27:08.842999 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" podStartSLOduration=2.921266056 podStartE2EDuration="3.842982084s" podCreationTimestamp="2026-02-18 14:27:05 +0000 UTC" firstStartedPulling="2026-02-18 14:27:06.868308721 +0000 UTC m=+1659.364029643" lastFinishedPulling="2026-02-18 14:27:07.790024749 +0000 UTC m=+1660.285745671" observedRunningTime="2026-02-18 14:27:08.824557662 +0000 UTC m=+1661.320278584" watchObservedRunningTime="2026-02-18 
14:27:08.842982084 +0000 UTC m=+1661.338703006" Feb 18 14:27:09 crc kubenswrapper[4739]: I0218 14:27:09.790880 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"de0100ca-60e4-40d3-afeb-f5da9513fdc1","Type":"ContainerStarted","Data":"6dd00087a808c5662ace512584ad8a0d61f186a6d9327c0016591eca1cbb805c"} Feb 18 14:27:11 crc kubenswrapper[4739]: I0218 14:27:11.813054 4739 generic.go:334] "Generic (PLEG): container finished" podID="ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" containerID="165b501719bb5519d62c583995defec1cc41f398b6ba2378ea6ec76af3514685" exitCode=0 Feb 18 14:27:11 crc kubenswrapper[4739]: I0218 14:27:11.813153 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" event={"ID":"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6","Type":"ContainerDied","Data":"165b501719bb5519d62c583995defec1cc41f398b6ba2378ea6ec76af3514685"} Feb 18 14:27:12 crc kubenswrapper[4739]: I0218 14:27:12.417485 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:27:12 crc kubenswrapper[4739]: E0218 14:27:12.418018 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.327115 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.411630 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam\") pod \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.411847 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory\") pod \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.411932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r4rx\" (UniqueName: \"kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx\") pod \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.417304 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx" (OuterVolumeSpecName: "kube-api-access-5r4rx") pod "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" (UID: "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6"). InnerVolumeSpecName "kube-api-access-5r4rx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.447167 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" (UID: "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.447574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory" (OuterVolumeSpecName: "inventory") pod "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" (UID: "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.518348 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r4rx\" (UniqueName: \"kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.518409 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.518421 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.838706 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" 
event={"ID":"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6","Type":"ContainerDied","Data":"8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7"} Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.839028 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.838864 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.935573 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f"] Feb 18 14:27:13 crc kubenswrapper[4739]: E0218 14:27:13.936135 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.936152 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.936488 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.937303 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.939500 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.939671 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.939572 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.939942 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.963782 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f"] Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.031765 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.032080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwsl\" (UniqueName: \"kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.032229 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.032381 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.134798 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htwsl\" (UniqueName: \"kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.134876 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.134980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.135141 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.139756 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.140233 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.152466 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.162104 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htwsl\" (UniqueName: \"kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.265061 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: W0218 14:27:14.841323 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64a6af44_5f38_4ac7_a370_74b190762136.slice/crio-1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3 WatchSource:0}: Error finding container 1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3: Status 404 returned error can't find the container with id 1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3 Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.842996 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f"] Feb 18 14:27:15 crc kubenswrapper[4739]: I0218 14:27:15.867321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" event={"ID":"64a6af44-5f38-4ac7-a370-74b190762136","Type":"ContainerStarted","Data":"693161be45d8d36fda8c2d4dc95d7bad1c0a7d87875be1b93f225b971a6de51d"} Feb 18 14:27:15 crc kubenswrapper[4739]: I0218 14:27:15.867940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" 
event={"ID":"64a6af44-5f38-4ac7-a370-74b190762136","Type":"ContainerStarted","Data":"1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3"} Feb 18 14:27:15 crc kubenswrapper[4739]: I0218 14:27:15.897411 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" podStartSLOduration=2.495766389 podStartE2EDuration="2.897394149s" podCreationTimestamp="2026-02-18 14:27:13 +0000 UTC" firstStartedPulling="2026-02-18 14:27:14.843662385 +0000 UTC m=+1667.339383307" lastFinishedPulling="2026-02-18 14:27:15.245290145 +0000 UTC m=+1667.741011067" observedRunningTime="2026-02-18 14:27:15.884793473 +0000 UTC m=+1668.380514415" watchObservedRunningTime="2026-02-18 14:27:15.897394149 +0000 UTC m=+1668.393115071" Feb 18 14:27:27 crc kubenswrapper[4739]: I0218 14:27:27.410429 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:27:27 crc kubenswrapper[4739]: E0218 14:27:27.411318 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:27:33 crc kubenswrapper[4739]: I0218 14:27:33.236024 4739 scope.go:117] "RemoveContainer" containerID="0a9c96ef9bc05a189057147729fcd0a7c0a62f199e816b285da0bdde192dbc40" Feb 18 14:27:33 crc kubenswrapper[4739]: I0218 14:27:33.296390 4739 scope.go:117] "RemoveContainer" containerID="cb1eddfed9e44b497a97463dd1b3569fad968271c4c4d74bfb3de94948277b04" Feb 18 14:27:41 crc kubenswrapper[4739]: I0218 14:27:41.412151 4739 scope.go:117] "RemoveContainer" 
containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:27:41 crc kubenswrapper[4739]: E0218 14:27:41.412889 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:27:42 crc kubenswrapper[4739]: I0218 14:27:42.157286 4739 generic.go:334] "Generic (PLEG): container finished" podID="de0100ca-60e4-40d3-afeb-f5da9513fdc1" containerID="6dd00087a808c5662ace512584ad8a0d61f186a6d9327c0016591eca1cbb805c" exitCode=0 Feb 18 14:27:42 crc kubenswrapper[4739]: I0218 14:27:42.157370 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"de0100ca-60e4-40d3-afeb-f5da9513fdc1","Type":"ContainerDied","Data":"6dd00087a808c5662ace512584ad8a0d61f186a6d9327c0016591eca1cbb805c"} Feb 18 14:27:43 crc kubenswrapper[4739]: I0218 14:27:43.169676 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"de0100ca-60e4-40d3-afeb-f5da9513fdc1","Type":"ContainerStarted","Data":"8b34ba5d73f3b358eb72273b94ce8f47208dc2fb18816f449b33e731312474a3"} Feb 18 14:27:43 crc kubenswrapper[4739]: I0218 14:27:43.170202 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 18 14:27:43 crc kubenswrapper[4739]: I0218 14:27:43.204516 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.204498239 podStartE2EDuration="37.204498239s" podCreationTimestamp="2026-02-18 14:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 14:27:43.198658763 +0000 UTC m=+1695.694379695" watchObservedRunningTime="2026-02-18 14:27:43.204498239 +0000 UTC m=+1695.700219161" Feb 18 14:27:54 crc kubenswrapper[4739]: I0218 14:27:54.411260 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:27:54 crc kubenswrapper[4739]: E0218 14:27:54.412737 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:27:56 crc kubenswrapper[4739]: I0218 14:27:56.785657 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 18 14:27:56 crc kubenswrapper[4739]: I0218 14:27:56.861070 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:28:01 crc kubenswrapper[4739]: I0218 14:28:01.642289 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" containerID="cri-o://9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef" gracePeriod=604796 Feb 18 14:28:03 crc kubenswrapper[4739]: I0218 14:28:03.091019 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 18 14:28:06 crc kubenswrapper[4739]: I0218 14:28:06.411542 4739 scope.go:117] "RemoveContainer" 
containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:28:06 crc kubenswrapper[4739]: E0218 14:28:06.412162 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.407702 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.498409 4739 generic.go:334] "Generic (PLEG): container finished" podID="70500a97-2717-4761-884a-25cf8ab89380" containerID="9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef" exitCode=0 Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.498470 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerDied","Data":"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef"} Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.498498 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerDied","Data":"6a1064f065e3c36cfd11b4abc66439e09b22ce13fc43d0cfe21f9e1ccc93bcec"} Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.498514 4739 scope.go:117] "RemoveContainer" containerID="9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.498678 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.529528 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.529581 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.529611 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqscd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530267 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530305 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530424 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530629 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530680 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530696 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530753 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 
crc kubenswrapper[4739]: I0218 14:28:08.531269 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.531685 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.532810 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.534545 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.538733 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd" (OuterVolumeSpecName: "kube-api-access-xqscd") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). 
InnerVolumeSpecName "kube-api-access-xqscd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.554011 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info" (OuterVolumeSpecName: "pod-info") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.554087 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.554606 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.571546 4739 scope.go:117] "RemoveContainer" containerID="50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.594053 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data" (OuterVolumeSpecName: "config-data") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.594865 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd" (OuterVolumeSpecName: "persistence") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642037 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642069 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642079 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqscd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642108 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") on node \"crc\" " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642118 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642130 4739 
reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642142 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642151 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.647185 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf" (OuterVolumeSpecName: "server-conf") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.695747 4739 scope.go:117] "RemoveContainer" containerID="9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef" Feb 18 14:28:08 crc kubenswrapper[4739]: E0218 14:28:08.696254 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef\": container with ID starting with 9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef not found: ID does not exist" containerID="9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.696299 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef"} err="failed to get container status \"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef\": rpc error: code = NotFound desc = could not find container \"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef\": container with ID starting with 9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef not found: ID does not exist" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.696329 4739 scope.go:117] "RemoveContainer" containerID="50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886" Feb 18 14:28:08 crc kubenswrapper[4739]: E0218 14:28:08.699618 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886\": container with ID starting with 50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886 not found: ID does not exist" containerID="50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.699701 
4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886"} err="failed to get container status \"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886\": rpc error: code = NotFound desc = could not find container \"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886\": container with ID starting with 50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886 not found: ID does not exist"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.712134 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.712296 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd") on node "crc"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.746808 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf\") on node \"crc\" DevicePath \"\""
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.746870 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") on node \"crc\" DevicePath \"\""
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.754147 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.842214 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.849792 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.855383 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.873654 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 14:28:08 crc kubenswrapper[4739]: E0218 14:28:08.874280 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="setup-container"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.874300 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="setup-container"
Feb 18 14:28:08 crc kubenswrapper[4739]: E0218 14:28:08.874331 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.874340 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.874657 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.876280 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.921848 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952312 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952378 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952468 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd925294-7441-4ba8-af23-290ef19deb9b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952507 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ch44\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-kube-api-access-9ch44\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952587 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952650 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952728 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952773 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd925294-7441-4ba8-af23-290ef19deb9b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.057986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058085 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058131 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058163 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058206 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd925294-7441-4ba8-af23-290ef19deb9b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058240 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058317 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058349 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058414 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058438 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd925294-7441-4ba8-af23-290ef19deb9b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058479 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ch44\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-kube-api-access-9ch44\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.066422 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.066480 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e4a135f402bfdd87a0dd9dc00d6afd10d61dd6559041546aff07ddf4aa84ac2/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.066891 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.069212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.076957 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.079368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.079973 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.093839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.094320 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.097754 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd925294-7441-4ba8-af23-290ef19deb9b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.105395 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd925294-7441-4ba8-af23-290ef19deb9b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.135385 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ch44\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-kube-api-access-9ch44\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.403627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0"
Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.589584 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 18 14:28:10 crc kubenswrapper[4739]: I0218 14:28:10.115595 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 14:28:10 crc kubenswrapper[4739]: I0218 14:28:10.422109 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70500a97-2717-4761-884a-25cf8ab89380" path="/var/lib/kubelet/pods/70500a97-2717-4761-884a-25cf8ab89380/volumes"
Feb 18 14:28:10 crc kubenswrapper[4739]: I0218 14:28:10.530223 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd925294-7441-4ba8-af23-290ef19deb9b","Type":"ContainerStarted","Data":"a90863a928903c3aac9369cd5894ed94762e95ec15acbb20ff0c0a3eebfb3eb1"}
Feb 18 14:28:12 crc kubenswrapper[4739]: I0218 14:28:12.555031 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd925294-7441-4ba8-af23-290ef19deb9b","Type":"ContainerStarted","Data":"1a13255ed1ef9e684006b83a2f8cf160ca9eedb6ed2033c5fcf1a517209655e1"}
Feb 18 14:28:18 crc kubenswrapper[4739]: I0218 14:28:18.426626 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"
Feb 18 14:28:18 crc kubenswrapper[4739]: E0218 14:28:18.427732 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:28:33 crc kubenswrapper[4739]: I0218 14:28:33.410304 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"
Feb 18 14:28:33 crc kubenswrapper[4739]: E0218 14:28:33.411198 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:28:33 crc kubenswrapper[4739]: I0218 14:28:33.496476 4739 scope.go:117] "RemoveContainer" containerID="f3277f9c953c856503e9f54f23df005c12ffcd64974ef18efe5d6f5daaca7db8"
Feb 18 14:28:33 crc kubenswrapper[4739]: I0218 14:28:33.523111 4739 scope.go:117] "RemoveContainer" containerID="51c86b3e76646ccace7cb768aa196771df840d5aa0602f13a9e3d3f8fd198f42"
Feb 18 14:28:43 crc kubenswrapper[4739]: I0218 14:28:43.913178 4739 generic.go:334] "Generic (PLEG): container finished" podID="bd925294-7441-4ba8-af23-290ef19deb9b" containerID="1a13255ed1ef9e684006b83a2f8cf160ca9eedb6ed2033c5fcf1a517209655e1" exitCode=0
Feb 18 14:28:43 crc kubenswrapper[4739]: I0218 14:28:43.913437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd925294-7441-4ba8-af23-290ef19deb9b","Type":"ContainerDied","Data":"1a13255ed1ef9e684006b83a2f8cf160ca9eedb6ed2033c5fcf1a517209655e1"}
Feb 18 14:28:44 crc kubenswrapper[4739]: I0218 14:28:44.927118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd925294-7441-4ba8-af23-290ef19deb9b","Type":"ContainerStarted","Data":"e3d729500906cf43dc6d40a9f3c8718a85d4049bcf52d0fc7ee100523b3b2d83"}
Feb 18 14:28:44 crc kubenswrapper[4739]: I0218 14:28:44.927920 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 18 14:28:44 crc kubenswrapper[4739]: I0218 14:28:44.966496 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.966446251 podStartE2EDuration="36.966446251s" podCreationTimestamp="2026-02-18 14:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:28:44.947307167 +0000 UTC m=+1757.443028109" watchObservedRunningTime="2026-02-18 14:28:44.966446251 +0000 UTC m=+1757.462167173"
Feb 18 14:28:47 crc kubenswrapper[4739]: I0218 14:28:47.410805 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"
Feb 18 14:28:47 crc kubenswrapper[4739]: E0218 14:28:47.411723 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:28:59 crc kubenswrapper[4739]: I0218 14:28:59.593624 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 18 14:29:01 crc kubenswrapper[4739]: I0218 14:29:01.410926 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"
Feb 18 14:29:01 crc kubenswrapper[4739]: E0218 14:29:01.411540 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:29:14 crc kubenswrapper[4739]: I0218 14:29:14.410434 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"
Feb 18 14:29:14 crc kubenswrapper[4739]: E0218 14:29:14.411137 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:29:29 crc kubenswrapper[4739]: I0218 14:29:29.410941 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"
Feb 18 14:29:30 crc kubenswrapper[4739]: I0218 14:29:30.445844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334"}
Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.051907 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-nndld"]
Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.066402 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d1e3-account-create-update-27rvz"]
Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.081689 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fwtxs"]
Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.093206 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d1e3-account-create-update-27rvz"]
Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.104328 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-nndld"]
Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.115159 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fwtxs"]
Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.426279 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075a587a-4bf2-43e9-8c63-1357e9cb05c9" path="/var/lib/kubelet/pods/075a587a-4bf2-43e9-8c63-1357e9cb05c9/volumes"
Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.428181 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08bf9ca-ebbc-4d72-b227-20a5c7eed529" path="/var/lib/kubelet/pods/b08bf9ca-ebbc-4d72-b227-20a5c7eed529/volumes"
Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.429494 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" path="/var/lib/kubelet/pods/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66/volumes"
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.061515 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-m9bmk"]
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.078654 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4dc5-account-create-update-shnqq"]
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.090892 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-m9bmk"]
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.102702 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4dc5-account-create-update-shnqq"]
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.610736 4739 scope.go:117] "RemoveContainer" containerID="68e9714ba536a43d37501d6b7f010d3c6c39bb5acb025c1ebc16c210fbdc0c5c"
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.635868 4739 scope.go:117] "RemoveContainer" containerID="4436b566cc1f05e9fd1f4a6b477aee31ea85c52d7a160c7100ca69ed4da051cd"
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.664763 4739 scope.go:117] "RemoveContainer" containerID="a772895e8b9301fae88d05626c6575b52b2a6a8650d7cff35a137c777919497f"
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.696466 4739 scope.go:117] "RemoveContainer" containerID="0bb35ababf8f49716c465fd1a071a3fc61371f1c41007f69d57d1ece07a81b5b"
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.779218 4739 scope.go:117] "RemoveContainer" containerID="0ff92f634c028d5fd31e4fe14bc0e896efd80534f8071fbf418f38d2b982dd3d"
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.809399 4739 scope.go:117] "RemoveContainer" containerID="cbc19c6c86655aa18f2e8592ecad70f9e15a7d8e6a21338195448e4c95da6205"
Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.872891 4739 scope.go:117] "RemoveContainer" containerID="3e20d5bc67da999c67b2b030638e14f2a7846dbe20d76ce5dce6686024c72645"
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.032742 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-x8lmx"]
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.047646 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-84ff-account-create-update-9xb4v"]
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.057739 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-973a-account-create-update-lsz5w"]
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.068882 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-84ff-account-create-update-9xb4v"]
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.079243 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-x8lmx"]
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.091507 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-973a-account-create-update-lsz5w"]
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.424814 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0275833c-ab0c-4865-9c6e-5c8d54a5e238" path="/var/lib/kubelet/pods/0275833c-ab0c-4865-9c6e-5c8d54a5e238/volumes"
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.425946 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e4c634d-6e65-4f6b-8001-0ac3e35a4801" path="/var/lib/kubelet/pods/8e4c634d-6e65-4f6b-8001-0ac3e35a4801/volumes"
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.426585 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c50e4a24-ad83-4694-be4d-6b0811726c3d" path="/var/lib/kubelet/pods/c50e4a24-ad83-4694-be4d-6b0811726c3d/volumes"
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.427192 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1637477-36b3-4dea-b260-15b6e2532af8" path="/var/lib/kubelet/pods/e1637477-36b3-4dea-b260-15b6e2532af8/volumes"
Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.428859 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c94ce9-7b1b-43bd-9c93-303d0e675809" path="/var/lib/kubelet/pods/f8c94ce9-7b1b-43bd-9c93-303d0e675809/volumes"
Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.069543 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-d06e-account-create-update-nwqxj"]
Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.085244 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"]
Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.098974 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"]
Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.110805 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-d06e-account-create-update-nwqxj"]
Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.428178 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4689ea28-dac4-434f-af87-18d6fc903330" path="/var/lib/kubelet/pods/4689ea28-dac4-434f-af87-18d6fc903330/volumes"
Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.429165 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" path="/var/lib/kubelet/pods/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff/volumes"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.198176 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"]
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.202725 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.207139 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.207826 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.273880 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"]
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.383799 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x67hl\" (UniqueName: \"kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.383911 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.384009 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.485749 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x67hl\" (UniqueName: \"kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.486038 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.486224 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.487591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.492293 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.502838 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x67hl\" (UniqueName: \"kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.545170 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:01 crc kubenswrapper[4739]: I0218 14:30:01.074388 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"]
Feb 18 14:30:01 crc kubenswrapper[4739]: I0218 14:30:01.816322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" event={"ID":"87fcc484-b43a-4471-9ae0-a8af18a937be","Type":"ContainerStarted","Data":"9b76a0bd2d504547a365abbe6087525e7fb33e148bde30e2d85310db58fb4427"}
Feb 18 14:30:01 crc kubenswrapper[4739]: I0218 14:30:01.816657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" event={"ID":"87fcc484-b43a-4471-9ae0-a8af18a937be","Type":"ContainerStarted","Data":"5af4dfb26a353ffc2911e046aec158bba417dabe58af26b40fa241b99d809ff5"}
Feb 18 14:30:01 crc kubenswrapper[4739]: I0218 14:30:01.836781 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" podStartSLOduration=1.836764562 podStartE2EDuration="1.836764562s" podCreationTimestamp="2026-02-18 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:30:01.830422508 +0000 UTC m=+1834.326143440" watchObservedRunningTime="2026-02-18 14:30:01.836764562 +0000 UTC m=+1834.332485504"
Feb 18 14:30:02 crc kubenswrapper[4739]: I0218 14:30:02.829882 4739 generic.go:334] "Generic (PLEG): container finished" podID="87fcc484-b43a-4471-9ae0-a8af18a937be" containerID="9b76a0bd2d504547a365abbe6087525e7fb33e148bde30e2d85310db58fb4427" exitCode=0
Feb 18 14:30:02 crc kubenswrapper[4739]: I0218 14:30:02.830972 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" event={"ID":"87fcc484-b43a-4471-9ae0-a8af18a937be","Type":"ContainerDied","Data":"9b76a0bd2d504547a365abbe6087525e7fb33e148bde30e2d85310db58fb4427"}
Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.313954 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"
Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.399399 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume\") pod \"87fcc484-b43a-4471-9ae0-a8af18a937be\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") "
Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.399607 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x67hl\" (UniqueName: \"kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl\") pod \"87fcc484-b43a-4471-9ae0-a8af18a937be\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") "
Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.399847 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume\") pod \"87fcc484-b43a-4471-9ae0-a8af18a937be\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") "
Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.401774 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume" (OuterVolumeSpecName: "config-volume") pod "87fcc484-b43a-4471-9ae0-a8af18a937be" (UID: "87fcc484-b43a-4471-9ae0-a8af18a937be"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.407420 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87fcc484-b43a-4471-9ae0-a8af18a937be" (UID: "87fcc484-b43a-4471-9ae0-a8af18a937be"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.407475 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl" (OuterVolumeSpecName: "kube-api-access-x67hl") pod "87fcc484-b43a-4471-9ae0-a8af18a937be" (UID: "87fcc484-b43a-4471-9ae0-a8af18a937be"). InnerVolumeSpecName "kube-api-access-x67hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.502512 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.502548 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.502560 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x67hl\" (UniqueName: \"kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.855845 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" 
event={"ID":"87fcc484-b43a-4471-9ae0-a8af18a937be","Type":"ContainerDied","Data":"5af4dfb26a353ffc2911e046aec158bba417dabe58af26b40fa241b99d809ff5"} Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.856166 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af4dfb26a353ffc2911e046aec158bba417dabe58af26b40fa241b99d809ff5" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.855925 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:09 crc kubenswrapper[4739]: I0218 14:30:09.051302 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2t2n6"] Feb 18 14:30:09 crc kubenswrapper[4739]: I0218 14:30:09.064222 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2t2n6"] Feb 18 14:30:10 crc kubenswrapper[4739]: I0218 14:30:10.423551 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1df0b15-6927-4300-b034-6b5c3308320d" path="/var/lib/kubelet/pods/f1df0b15-6927-4300-b034-6b5c3308320d/volumes" Feb 18 14:30:20 crc kubenswrapper[4739]: I0218 14:30:20.081199 4739 generic.go:334] "Generic (PLEG): container finished" podID="64a6af44-5f38-4ac7-a370-74b190762136" containerID="693161be45d8d36fda8c2d4dc95d7bad1c0a7d87875be1b93f225b971a6de51d" exitCode=0 Feb 18 14:30:20 crc kubenswrapper[4739]: I0218 14:30:20.081319 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" event={"ID":"64a6af44-5f38-4ac7-a370-74b190762136","Type":"ContainerDied","Data":"693161be45d8d36fda8c2d4dc95d7bad1c0a7d87875be1b93f225b971a6de51d"} Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.048602 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-tzg9c"] Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.059206 4739 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rlcgk"] Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.069207 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rlcgk"] Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.085190 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-tzg9c"] Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.594289 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.736184 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htwsl\" (UniqueName: \"kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl\") pod \"64a6af44-5f38-4ac7-a370-74b190762136\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.736236 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory\") pod \"64a6af44-5f38-4ac7-a370-74b190762136\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.736412 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam\") pod \"64a6af44-5f38-4ac7-a370-74b190762136\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.736643 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle\") pod 
\"64a6af44-5f38-4ac7-a370-74b190762136\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.743798 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl" (OuterVolumeSpecName: "kube-api-access-htwsl") pod "64a6af44-5f38-4ac7-a370-74b190762136" (UID: "64a6af44-5f38-4ac7-a370-74b190762136"). InnerVolumeSpecName "kube-api-access-htwsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.744708 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "64a6af44-5f38-4ac7-a370-74b190762136" (UID: "64a6af44-5f38-4ac7-a370-74b190762136"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.777022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "64a6af44-5f38-4ac7-a370-74b190762136" (UID: "64a6af44-5f38-4ac7-a370-74b190762136"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.785867 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory" (OuterVolumeSpecName: "inventory") pod "64a6af44-5f38-4ac7-a370-74b190762136" (UID: "64a6af44-5f38-4ac7-a370-74b190762136"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.841158 4739 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.841394 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htwsl\" (UniqueName: \"kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.841411 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.841425 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.034047 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4km74"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.053097 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4km74"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.067814 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6lzcd"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.084012 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1ad6-account-create-update-pz97t"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.103138 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-d1d2-account-create-update-spvtj"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.107306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" event={"ID":"64a6af44-5f38-4ac7-a370-74b190762136","Type":"ContainerDied","Data":"1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3"} Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.107353 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.107392 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.120847 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6lzcd"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.136371 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1ad6-account-create-update-pz97t"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.152273 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d1d2-account-create-update-spvtj"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.168377 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-c4dd-account-create-update-xvgtp"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.184266 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-c4dd-account-create-update-xvgtp"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.199200 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64f1-account-create-update-9xxvd"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.229739 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-64f1-account-create-update-9xxvd"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.250532 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv"] Feb 18 14:30:22 crc kubenswrapper[4739]: E0218 14:30:22.251129 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fcc484-b43a-4471-9ae0-a8af18a937be" containerName="collect-profiles" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.251153 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fcc484-b43a-4471-9ae0-a8af18a937be" containerName="collect-profiles" Feb 18 14:30:22 crc kubenswrapper[4739]: E0218 14:30:22.251182 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a6af44-5f38-4ac7-a370-74b190762136" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.251191 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a6af44-5f38-4ac7-a370-74b190762136" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.251408 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a6af44-5f38-4ac7-a370-74b190762136" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.251436 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="87fcc484-b43a-4471-9ae0-a8af18a937be" containerName="collect-profiles" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.252324 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.255084 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.255141 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.255200 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.255198 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.261532 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.362693 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.362783 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 
14:30:22.362941 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfhkd\" (UniqueName: \"kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.430791 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e0fc8a-5942-417e-9fbb-4f94536db193" path="/var/lib/kubelet/pods/20e0fc8a-5942-417e-9fbb-4f94536db193/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.437293 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" path="/var/lib/kubelet/pods/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.440327 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c90e24b-98c5-4e26-8819-a5ae1aef1102" path="/var/lib/kubelet/pods/2c90e24b-98c5-4e26-8819-a5ae1aef1102/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.443772 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bd8e39-8e54-46e1-8217-dbdd74be8a8c" path="/var/lib/kubelet/pods/39bd8e39-8e54-46e1-8217-dbdd74be8a8c/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.447641 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d208990-8bd6-4b82-bba8-200f5c7985d0" path="/var/lib/kubelet/pods/4d208990-8bd6-4b82-bba8-200f5c7985d0/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.450200 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e60ca77-b621-4dfc-8b92-89d8cad06bf0" path="/var/lib/kubelet/pods/4e60ca77-b621-4dfc-8b92-89d8cad06bf0/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.452746 4739 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da457314-f1eb-477e-93c7-cf0d01e0f1e1" path="/var/lib/kubelet/pods/da457314-f1eb-477e-93c7-cf0d01e0f1e1/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.454642 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06df363-1196-4ba5-a5ba-d6e6c419a9d2" path="/var/lib/kubelet/pods/f06df363-1196-4ba5-a5ba-d6e6c419a9d2/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.465098 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.465181 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.465266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfhkd\" (UniqueName: \"kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.471165 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.473530 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.485247 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfhkd\" (UniqueName: \"kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.577080 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:23 crc kubenswrapper[4739]: I0218 14:30:23.194913 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv"] Feb 18 14:30:23 crc kubenswrapper[4739]: I0218 14:30:23.199541 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:30:24 crc kubenswrapper[4739]: I0218 14:30:24.145291 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" event={"ID":"ed059e6b-2560-487a-98a8-c1443d31cca9","Type":"ContainerStarted","Data":"82a6e9a5f9c5c80c3e4624efd3163c459809aa077df1e7712fd32ff2f63f2eaa"} Feb 18 14:30:25 crc kubenswrapper[4739]: I0218 14:30:25.167930 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" event={"ID":"ed059e6b-2560-487a-98a8-c1443d31cca9","Type":"ContainerStarted","Data":"e36e9cea2e9509acd37c756569ccb607ef32b0c6a6cd144b690231f1e10fd4d3"} Feb 18 14:30:25 crc kubenswrapper[4739]: I0218 14:30:25.184353 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" podStartSLOduration=1.950107111 podStartE2EDuration="3.18433797s" podCreationTimestamp="2026-02-18 14:30:22 +0000 UTC" firstStartedPulling="2026-02-18 14:30:23.199227704 +0000 UTC m=+1855.694948626" lastFinishedPulling="2026-02-18 14:30:24.433458563 +0000 UTC m=+1856.929179485" observedRunningTime="2026-02-18 14:30:25.181838082 +0000 UTC m=+1857.677559004" watchObservedRunningTime="2026-02-18 14:30:25.18433797 +0000 UTC m=+1857.680058892" Feb 18 14:30:26 crc kubenswrapper[4739]: I0218 14:30:26.048585 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gnm8m"] Feb 18 14:30:26 crc kubenswrapper[4739]: I0218 
14:30:26.065397 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gnm8m"] Feb 18 14:30:26 crc kubenswrapper[4739]: I0218 14:30:26.428273 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edf3454e-4ac2-42a7-98b1-0f43065764c2" path="/var/lib/kubelet/pods/edf3454e-4ac2-42a7-98b1-0f43065764c2/volumes" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.039216 4739 scope.go:117] "RemoveContainer" containerID="03bcbac09256150553750b2ceb7fcb6d133193457a99a73d75f4293c1b1edcb5" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.074497 4739 scope.go:117] "RemoveContainer" containerID="f594884fb4b83b0c04ce8bf8aae7f920c402fcb97cae39a2f4cf017d5bf71b59" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.164021 4739 scope.go:117] "RemoveContainer" containerID="040eeb174e895a0add4ac74007d11ab4b4e0bb01f7764fd5d6eff38c7db3910b" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.210646 4739 scope.go:117] "RemoveContainer" containerID="0d27470aa9ffe633d4b6a23a81a92ae2b802439fbedd1d4e1b5cb7aad209d3a5" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.287159 4739 scope.go:117] "RemoveContainer" containerID="76d32868e66155322323110ff775c5fb0e6f82fae8441ced2e3f98e4b9321c1d" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.329138 4739 scope.go:117] "RemoveContainer" containerID="0d326d9bd65ce654fe1a2b264586d9b66aecc19bd475abfcd3d94ee3f6d660d5" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.387389 4739 scope.go:117] "RemoveContainer" containerID="b71e725f96b6406936744325d7c950ca7ac36b206c41fc8ca5c6914fe0564b72" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.415200 4739 scope.go:117] "RemoveContainer" containerID="983f1c80cf67be3eed058f21350cec25209804a043b4033e89a7b4a7d1a23683" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.438939 4739 scope.go:117] "RemoveContainer" containerID="6e0f8193aeee1a9fde88a87836367d413530c7cef69dff31c0125463693bc71d" Feb 18 14:30:34 crc 
kubenswrapper[4739]: I0218 14:30:34.462641 4739 scope.go:117] "RemoveContainer" containerID="a765ba1e358815d14c909f560cbad1d380538cd7c1dacb154a2b8d05f4b98d09"
Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.488909 4739 scope.go:117] "RemoveContainer" containerID="e1cc91021e3962c425b43e910f166ba0094177006eafab98477f0ed269daa076"
Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.518986 4739 scope.go:117] "RemoveContainer" containerID="52da9b09d947fe24144c6c47d6f9580445b80136111737b82302681aad3a5631"
Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.549488 4739 scope.go:117] "RemoveContainer" containerID="06c6fe02fa56ef5594d8d43926f6b44f805a40324d87581600b0c88cf5d2d444"
Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.571743 4739 scope.go:117] "RemoveContainer" containerID="2f8b36ebc50069dffafc10ad5580f0650c3a5e44aee32de71fb90f645671e661"
Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.610504 4739 scope.go:117] "RemoveContainer" containerID="b43639724ef806f70a0570b3c7861b506614a00a4a43b0f7196363d0163afa24"
Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.649379 4739 scope.go:117] "RemoveContainer" containerID="6e738a7131fce65327168b727257db46debba0b3633c57a8a9e6484d2f38829f"
Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.710088 4739 scope.go:117] "RemoveContainer" containerID="aa9ecd9df38cda3b827f1db0a7848f77cc373ad0ddebd313df697a0b9ff36e7e"
Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.730417 4739 scope.go:117] "RemoveContainer" containerID="fad628d0c641c2b53d938feaf95bc1f324bbe0db103093a12604f18fd9eafc41"
Feb 18 14:30:38 crc kubenswrapper[4739]: I0218 14:30:38.071620 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gsm82"]
Feb 18 14:30:38 crc kubenswrapper[4739]: I0218 14:30:38.099679 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gsm82"]
Feb 18 14:30:38 crc kubenswrapper[4739]: I0218 14:30:38.422657 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbeb37ff-68ee-4cc5-add5-18fc25605b6f" path="/var/lib/kubelet/pods/dbeb37ff-68ee-4cc5-add5-18fc25605b6f/volumes"
Feb 18 14:31:29 crc kubenswrapper[4739]: I0218 14:31:29.373349 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:31:29 crc kubenswrapper[4739]: I0218 14:31:29.374380 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:31:30 crc kubenswrapper[4739]: I0218 14:31:30.060666 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-42sfc"]
Feb 18 14:31:30 crc kubenswrapper[4739]: I0218 14:31:30.072158 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-42sfc"]
Feb 18 14:31:30 crc kubenswrapper[4739]: I0218 14:31:30.424727 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c42d996-bf46-4e69-892f-c720a9bce282" path="/var/lib/kubelet/pods/0c42d996-bf46-4e69-892f-c720a9bce282/volumes"
Feb 18 14:31:32 crc kubenswrapper[4739]: I0218 14:31:32.033729 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-q58nf"]
Feb 18 14:31:32 crc kubenswrapper[4739]: I0218 14:31:32.046935 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-q58nf"]
Feb 18 14:31:32 crc kubenswrapper[4739]: I0218 14:31:32.423373 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" path="/var/lib/kubelet/pods/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc/volumes"
Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.177712 4739 scope.go:117] "RemoveContainer" containerID="d755d74166c084972a673dd411c3ae3925155e88943bb67d4481d42cff283489"
Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.211821 4739 scope.go:117] "RemoveContainer" containerID="2cf4cbe6ff09b90a4081b821121e04359d9724929504c9ff576ebbffcc98ba2d"
Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.257428 4739 scope.go:117] "RemoveContainer" containerID="eb767b246d01786ba7d5e7aea0f8547789de5633ab93f7984d8f9084bda9cde1"
Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.278748 4739 scope.go:117] "RemoveContainer" containerID="6c0ee0eafacbca4301c6ded44d73ba09227c9ee1f2e6957623ca4214bd62e5df"
Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.338860 4739 scope.go:117] "RemoveContainer" containerID="008998419ac3a845430a1074a96b3f7b5b4ba5a04964c1bb0ae62e1f93981104"
Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.391511 4739 scope.go:117] "RemoveContainer" containerID="331132c24f3ac7a502d7f3f575324d2550d00d5e32f94df80daa161182a3e385"
Feb 18 14:31:39 crc kubenswrapper[4739]: I0218 14:31:39.084242 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h5s86"]
Feb 18 14:31:39 crc kubenswrapper[4739]: I0218 14:31:39.096738 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h5s86"]
Feb 18 14:31:40 crc kubenswrapper[4739]: I0218 14:31:40.427960 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" path="/var/lib/kubelet/pods/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8/volumes"
Feb 18 14:31:43 crc kubenswrapper[4739]: I0218 14:31:43.053802 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hc8hk"]
Feb 18 14:31:43 crc kubenswrapper[4739]: I0218 14:31:43.066734 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-hm27f"]
Feb 18 14:31:43 crc kubenswrapper[4739]: I0218 14:31:43.082010 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-hm27f"]
Feb 18 14:31:43 crc kubenswrapper[4739]: I0218 14:31:43.094726 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hc8hk"]
Feb 18 14:31:44 crc kubenswrapper[4739]: I0218 14:31:44.423889 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d77527-a940-4423-ac63-4a7cdf366510" path="/var/lib/kubelet/pods/51d77527-a940-4423-ac63-4a7cdf366510/volumes"
Feb 18 14:31:44 crc kubenswrapper[4739]: I0218 14:31:44.425870 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3697715-3f94-4086-99ab-65a492bd7542" path="/var/lib/kubelet/pods/b3697715-3f94-4086-99ab-65a492bd7542/volumes"
Feb 18 14:31:59 crc kubenswrapper[4739]: I0218 14:31:59.372838 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:31:59 crc kubenswrapper[4739]: I0218 14:31:59.373424 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.373297 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.373911 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.373956 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4"
Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.375274 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.375352 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334" gracePeriod=600
Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.875334 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334" exitCode=0
Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.875428 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334"}
Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.875728 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"
Feb 18 14:32:30 crc kubenswrapper[4739]: I0218 14:32:30.888498 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"}
Feb 18 14:32:35 crc kubenswrapper[4739]: I0218 14:32:35.567974 4739 scope.go:117] "RemoveContainer" containerID="615daa9d2c89107b5d8baf69578eb811649ddb2693aedf9b046cefb6786b3af5"
Feb 18 14:32:35 crc kubenswrapper[4739]: I0218 14:32:35.613356 4739 scope.go:117] "RemoveContainer" containerID="13f81a775889f6ea108dde89cc1b11f4232f55a79b2165f0775cd5d113f547b2"
Feb 18 14:32:35 crc kubenswrapper[4739]: I0218 14:32:35.684362 4739 scope.go:117] "RemoveContainer" containerID="d0d344e509459df1445da7eae6edf0b5c1a43772e911ac197e49dc6ffc6fe7a4"
Feb 18 14:32:37 crc kubenswrapper[4739]: I0218 14:32:37.961474 4739 generic.go:334] "Generic (PLEG): container finished" podID="ed059e6b-2560-487a-98a8-c1443d31cca9" containerID="e36e9cea2e9509acd37c756569ccb607ef32b0c6a6cd144b690231f1e10fd4d3" exitCode=0
Feb 18 14:32:37 crc kubenswrapper[4739]: I0218 14:32:37.961581 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" event={"ID":"ed059e6b-2560-487a-98a8-c1443d31cca9","Type":"ContainerDied","Data":"e36e9cea2e9509acd37c756569ccb607ef32b0c6a6cd144b690231f1e10fd4d3"}
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.479922 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv"
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.668911 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfhkd\" (UniqueName: \"kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd\") pod \"ed059e6b-2560-487a-98a8-c1443d31cca9\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") "
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.668993 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory\") pod \"ed059e6b-2560-487a-98a8-c1443d31cca9\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") "
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.669212 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam\") pod \"ed059e6b-2560-487a-98a8-c1443d31cca9\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") "
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.682805 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd" (OuterVolumeSpecName: "kube-api-access-lfhkd") pod "ed059e6b-2560-487a-98a8-c1443d31cca9" (UID: "ed059e6b-2560-487a-98a8-c1443d31cca9"). InnerVolumeSpecName "kube-api-access-lfhkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.699975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed059e6b-2560-487a-98a8-c1443d31cca9" (UID: "ed059e6b-2560-487a-98a8-c1443d31cca9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.709983 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory" (OuterVolumeSpecName: "inventory") pod "ed059e6b-2560-487a-98a8-c1443d31cca9" (UID: "ed059e6b-2560-487a-98a8-c1443d31cca9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.774087 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.774121 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfhkd\" (UniqueName: \"kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd\") on node \"crc\" DevicePath \"\""
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.774131 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.989964 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" event={"ID":"ed059e6b-2560-487a-98a8-c1443d31cca9","Type":"ContainerDied","Data":"82a6e9a5f9c5c80c3e4624efd3163c459809aa077df1e7712fd32ff2f63f2eaa"}
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.990027 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82a6e9a5f9c5c80c3e4624efd3163c459809aa077df1e7712fd32ff2f63f2eaa"
Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.990217 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.088567 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"]
Feb 18 14:32:40 crc kubenswrapper[4739]: E0218 14:32:40.089182 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed059e6b-2560-487a-98a8-c1443d31cca9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.089212 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed059e6b-2560-487a-98a8-c1443d31cca9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.089560 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed059e6b-2560-487a-98a8-c1443d31cca9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.090596 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.093490 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.093752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.093969 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.096760 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.105984 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"]
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.194424 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf45n\" (UniqueName: \"kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.194820 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.194926 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.297097 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf45n\" (UniqueName: \"kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.297153 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.297275 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.301368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.313804 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.319690 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf45n\" (UniqueName: \"kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.420904 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.962481 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"]
Feb 18 14:32:41 crc kubenswrapper[4739]: I0218 14:32:41.000682 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" event={"ID":"c3fe82f6-0603-44f2-95fa-57ce24505d2c","Type":"ContainerStarted","Data":"fa05a5bdd5eb8aa6517618b2cc6b129b18c332acdce8ab6cf85adb799214f4aa"}
Feb 18 14:32:42 crc kubenswrapper[4739]: I0218 14:32:42.013471 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" event={"ID":"c3fe82f6-0603-44f2-95fa-57ce24505d2c","Type":"ContainerStarted","Data":"89e42e2a936eab142fac63aa2f66623e2e2cd57a28bd3401e4bd7c0a325f8fa0"}
Feb 18 14:32:42 crc kubenswrapper[4739]: I0218 14:32:42.031315 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" podStartSLOduration=1.491547416 podStartE2EDuration="2.031293874s" podCreationTimestamp="2026-02-18 14:32:40 +0000 UTC" firstStartedPulling="2026-02-18 14:32:40.967316273 +0000 UTC m=+1993.463037195" lastFinishedPulling="2026-02-18 14:32:41.507062731 +0000 UTC m=+1994.002783653" observedRunningTime="2026-02-18 14:32:42.027501235 +0000 UTC m=+1994.523222177" watchObservedRunningTime="2026-02-18 14:32:42.031293874 +0000 UTC m=+1994.527014796"
Feb 18 14:32:50 crc kubenswrapper[4739]: I0218 14:32:50.044833 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-frlf8"]
Feb 18 14:32:50 crc kubenswrapper[4739]: I0218 14:32:50.057976 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-frlf8"]
Feb 18 14:32:50 crc kubenswrapper[4739]: I0218 14:32:50.423705 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="290b50b0-4283-4a40-b694-4a5f18b39b1a" path="/var/lib/kubelet/pods/290b50b0-4283-4a40-b694-4a5f18b39b1a/volumes"
Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.037629 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8ab4-account-create-update-zkq89"]
Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.052992 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6q6nn"]
Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.065932 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-79vbk"]
Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.076662 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-022d-account-create-update-6krg8"]
Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.089104 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-04e8-account-create-update-9qcd6"]
Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.114695 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6q6nn"]
Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.128223 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8ab4-account-create-update-zkq89"]
Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.139113 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-022d-account-create-update-6krg8"]
Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.149635 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-04e8-account-create-update-9qcd6"]
Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.158271 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-79vbk"]
Feb 18 14:32:52 crc kubenswrapper[4739]: I0218 14:32:52.425268 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" path="/var/lib/kubelet/pods/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd/volumes"
Feb 18 14:32:52 crc kubenswrapper[4739]: I0218 14:32:52.426516 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f229688-5021-4d28-9109-98071744a102" path="/var/lib/kubelet/pods/1f229688-5021-4d28-9109-98071744a102/volumes"
Feb 18 14:32:52 crc kubenswrapper[4739]: I0218 14:32:52.430010 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="429115da-eb66-4dc9-9210-86cd0525a6cf" path="/var/lib/kubelet/pods/429115da-eb66-4dc9-9210-86cd0525a6cf/volumes"
Feb 18 14:32:52 crc kubenswrapper[4739]: I0218 14:32:52.430702 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33399d1-a28e-4e19-aba8-a218018e5e8b" path="/var/lib/kubelet/pods/c33399d1-a28e-4e19-aba8-a218018e5e8b/volumes"
Feb 18 14:32:52 crc kubenswrapper[4739]: I0218 14:32:52.431248 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f689babc-92f9-4e45-8fb3-40722e18cd10" path="/var/lib/kubelet/pods/f689babc-92f9-4e45-8fb3-40722e18cd10/volumes"
Feb 18 14:33:33 crc kubenswrapper[4739]: I0218 14:33:33.050379 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfg9d"]
Feb 18 14:33:33 crc kubenswrapper[4739]: I0218 14:33:33.062447 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfg9d"]
Feb 18 14:33:34 crc kubenswrapper[4739]: I0218 14:33:34.445547 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed7afcd-a9be-4c59-836d-355e4c502a01" path="/var/lib/kubelet/pods/2ed7afcd-a9be-4c59-836d-355e4c502a01/volumes"
Feb 18 14:33:35 crc kubenswrapper[4739]: I0218 14:33:35.840430 4739 scope.go:117] "RemoveContainer" containerID="cd193d9c848f0cb5846f4803a361ea578be3e4975f2d687992d1efc73cd54125"
Feb 18 14:33:35 crc kubenswrapper[4739]: I0218 14:33:35.887084 4739 scope.go:117] "RemoveContainer" containerID="d354c12b67eababcd672627661526374e41cf79bf2c5f51fc2d961512732ad80"
Feb 18 14:33:35 crc kubenswrapper[4739]: I0218 14:33:35.931920 4739 scope.go:117] "RemoveContainer" containerID="f180991429bb7c01f25e8e0932cfc4a2c2e639764155f5051da2395874ce4177"
Feb 18 14:33:36 crc kubenswrapper[4739]: I0218 14:33:36.011077 4739 scope.go:117] "RemoveContainer" containerID="c294346ed483351749b57b335bfd04c525dff76c2eb0efbc4e1ea2d1c1b22ce8"
Feb 18 14:33:36 crc kubenswrapper[4739]: I0218 14:33:36.128885 4739 scope.go:117] "RemoveContainer" containerID="f2e4b9fb06b8dfc6962768e47edc73a399125a6a5af8a24a17fe6e665b490f62"
Feb 18 14:33:36 crc kubenswrapper[4739]: I0218 14:33:36.191617 4739 scope.go:117] "RemoveContainer" containerID="7decdedc36c29035cbd6c5768e12052f73ae02bcfb7ff083bd55e7ded7c3ba91"
Feb 18 14:33:36 crc kubenswrapper[4739]: I0218 14:33:36.240001 4739 scope.go:117] "RemoveContainer" containerID="164ed4c991352152994d527ba5112c6e7d1903b4f2261af5e3d479652dee7c0f"
Feb 18 14:33:44 crc kubenswrapper[4739]: I0218 14:33:44.045193 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-55b1-account-create-update-rl2bd"]
Feb 18 14:33:44 crc kubenswrapper[4739]: I0218 14:33:44.055722 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-55b1-account-create-update-rl2bd"]
Feb 18 14:33:44 crc kubenswrapper[4739]: I0218 14:33:44.428059 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" path="/var/lib/kubelet/pods/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa/volumes"
Feb 18 14:33:45 crc kubenswrapper[4739]: I0218 14:33:45.034670 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-zmb2f"]
Feb 18 14:33:45 crc kubenswrapper[4739]: I0218 14:33:45.044869 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-zmb2f"]
Feb 18 14:33:46 crc kubenswrapper[4739]: I0218 14:33:46.441345 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4445c84e-2108-44e0-a46e-673fe0858df3" path="/var/lib/kubelet/pods/4445c84e-2108-44e0-a46e-673fe0858df3/volumes"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.461881 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hvzqm"]
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.466543 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.478186 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hvzqm"]
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.583055 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-catalog-content\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.583591 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-utilities\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.583643 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhkz\" (UniqueName: \"kubernetes.io/projected/c2f46b1c-aab8-49aa-936d-40da9b28333b-kube-api-access-lqhkz\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.685839 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-utilities\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.685949 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhkz\" (UniqueName: \"kubernetes.io/projected/c2f46b1c-aab8-49aa-936d-40da9b28333b-kube-api-access-lqhkz\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.686036 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-catalog-content\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.686336 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-utilities\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.686603 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-catalog-content\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.705251 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhkz\" (UniqueName: \"kubernetes.io/projected/c2f46b1c-aab8-49aa-936d-40da9b28333b-kube-api-access-lqhkz\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.788915 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hvzqm"
Feb 18 14:33:50 crc kubenswrapper[4739]: I0218 14:33:50.330538 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hvzqm"]
Feb 18 14:33:50 crc kubenswrapper[4739]: I0218 14:33:50.732187 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerStarted","Data":"444fa77d8c7d241ff0c97a4f96d30c1d73837e4032a3356b79e27ccd6961d7ea"}
Feb 18 14:33:50 crc kubenswrapper[4739]: I0218 14:33:50.732227 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerStarted","Data":"b92690c72462eba244e27a6cbf4928687a786ba839112f5863cebe7a7538bd7c"}
Feb 18 14:33:51 crc kubenswrapper[4739]: I0218 14:33:51.745227 4739 generic.go:334] "Generic (PLEG): container finished" podID="c2f46b1c-aab8-49aa-936d-40da9b28333b" containerID="444fa77d8c7d241ff0c97a4f96d30c1d73837e4032a3356b79e27ccd6961d7ea" exitCode=0
Feb 18 14:33:51 crc kubenswrapper[4739]: I0218 14:33:51.745318 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerDied","Data":"444fa77d8c7d241ff0c97a4f96d30c1d73837e4032a3356b79e27ccd6961d7ea"}
Feb 18 14:33:54 crc kubenswrapper[4739]: I0218 14:33:54.789222 4739 generic.go:334] "Generic (PLEG): container finished" podID="c3fe82f6-0603-44f2-95fa-57ce24505d2c" containerID="89e42e2a936eab142fac63aa2f66623e2e2cd57a28bd3401e4bd7c0a325f8fa0" exitCode=0
Feb 18 14:33:54 crc kubenswrapper[4739]: I0218 14:33:54.789318 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" event={"ID":"c3fe82f6-0603-44f2-95fa-57ce24505d2c","Type":"ContainerDied","Data":"89e42e2a936eab142fac63aa2f66623e2e2cd57a28bd3401e4bd7c0a325f8fa0"}
Feb 18 14:33:58 crc kubenswrapper[4739]: I0218 14:33:58.032830 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ldxnr"]
Feb 18 14:33:58 crc kubenswrapper[4739]: I0218 14:33:58.048682 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ldxnr"]
Feb 18 14:33:58 crc kubenswrapper[4739]: I0218 14:33:58.428370 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f44227f-28d1-4aaf-9133-c4560b893022" path="/var/lib/kubelet/pods/5f44227f-28d1-4aaf-9133-c4560b893022/volumes"
Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.209716 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"
Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.301615 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf45n\" (UniqueName: \"kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n\") pod \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") "
Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.302100 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory\") pod \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") "
Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.310163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n" (OuterVolumeSpecName: "kube-api-access-kf45n") pod "c3fe82f6-0603-44f2-95fa-57ce24505d2c" (UID: "c3fe82f6-0603-44f2-95fa-57ce24505d2c"). InnerVolumeSpecName "kube-api-access-kf45n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.364250 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory" (OuterVolumeSpecName: "inventory") pod "c3fe82f6-0603-44f2-95fa-57ce24505d2c" (UID: "c3fe82f6-0603-44f2-95fa-57ce24505d2c"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.403526 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam\") pod \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.403888 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf45n\" (UniqueName: \"kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.403905 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.444692 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3fe82f6-0603-44f2-95fa-57ce24505d2c" (UID: "c3fe82f6-0603-44f2-95fa-57ce24505d2c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.505424 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.855429 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" event={"ID":"c3fe82f6-0603-44f2-95fa-57ce24505d2c","Type":"ContainerDied","Data":"fa05a5bdd5eb8aa6517618b2cc6b129b18c332acdce8ab6cf85adb799214f4aa"} Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.855885 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa05a5bdd5eb8aa6517618b2cc6b129b18c332acdce8ab6cf85adb799214f4aa" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.855516 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.858562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerStarted","Data":"ba704138dbf39216d74ce1e1897b73f874d3997ca0fb6a822f58f7e5a0210e33"} Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.362637 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh"] Feb 18 14:34:01 crc kubenswrapper[4739]: E0218 14:34:01.363408 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3fe82f6-0603-44f2-95fa-57ce24505d2c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.363429 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3fe82f6-0603-44f2-95fa-57ce24505d2c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.363893 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3fe82f6-0603-44f2-95fa-57ce24505d2c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.365285 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.370294 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.370588 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.370752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.371279 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.372107 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh"] Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.531650 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h48dj\" (UniqueName: \"kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.532339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 
14:34:01.532577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.633766 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.633862 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.634755 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h48dj\" (UniqueName: \"kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.647198 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.648194 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.654136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h48dj\" (UniqueName: \"kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.732217 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:02 crc kubenswrapper[4739]: I0218 14:34:02.299537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh"] Feb 18 14:34:02 crc kubenswrapper[4739]: W0218 14:34:02.331300 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod884f40e4_492b_4f73_94a7_8be81bde150e.slice/crio-cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77 WatchSource:0}: Error finding container cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77: Status 404 returned error can't find the container with id cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77 Feb 18 14:34:02 crc kubenswrapper[4739]: I0218 14:34:02.886015 4739 generic.go:334] "Generic (PLEG): container finished" podID="c2f46b1c-aab8-49aa-936d-40da9b28333b" containerID="ba704138dbf39216d74ce1e1897b73f874d3997ca0fb6a822f58f7e5a0210e33" exitCode=0 Feb 18 14:34:02 crc kubenswrapper[4739]: I0218 14:34:02.886093 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerDied","Data":"ba704138dbf39216d74ce1e1897b73f874d3997ca0fb6a822f58f7e5a0210e33"} Feb 18 14:34:02 crc kubenswrapper[4739]: I0218 14:34:02.889519 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" event={"ID":"884f40e4-492b-4f73-94a7-8be81bde150e","Type":"ContainerStarted","Data":"cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77"} Feb 18 14:34:03 crc kubenswrapper[4739]: I0218 14:34:03.901383 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" 
event={"ID":"884f40e4-492b-4f73-94a7-8be81bde150e","Type":"ContainerStarted","Data":"18cade01ff342ab3b70b3ed35d174da6101ffd51f6ac4470478bce89a45f0e5c"} Feb 18 14:34:03 crc kubenswrapper[4739]: I0218 14:34:03.930849 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" podStartSLOduration=2.525927796 podStartE2EDuration="2.930828365s" podCreationTimestamp="2026-02-18 14:34:01 +0000 UTC" firstStartedPulling="2026-02-18 14:34:02.335108377 +0000 UTC m=+2074.830829299" lastFinishedPulling="2026-02-18 14:34:02.740008946 +0000 UTC m=+2075.235729868" observedRunningTime="2026-02-18 14:34:03.918645544 +0000 UTC m=+2076.414366476" watchObservedRunningTime="2026-02-18 14:34:03.930828365 +0000 UTC m=+2076.426549307" Feb 18 14:34:04 crc kubenswrapper[4739]: I0218 14:34:04.922595 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerStarted","Data":"d023c80406c03a6201ba40309856fb155c13a9f51b1123ea61496bb3dca72e55"} Feb 18 14:34:04 crc kubenswrapper[4739]: I0218 14:34:04.961947 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hvzqm" podStartSLOduration=2.750166928 podStartE2EDuration="15.961927353s" podCreationTimestamp="2026-02-18 14:33:49 +0000 UTC" firstStartedPulling="2026-02-18 14:33:50.734292898 +0000 UTC m=+2063.230013820" lastFinishedPulling="2026-02-18 14:34:03.946053313 +0000 UTC m=+2076.441774245" observedRunningTime="2026-02-18 14:34:04.954541165 +0000 UTC m=+2077.450262087" watchObservedRunningTime="2026-02-18 14:34:04.961927353 +0000 UTC m=+2077.457648275" Feb 18 14:34:08 crc kubenswrapper[4739]: I0218 14:34:08.964420 4739 generic.go:334] "Generic (PLEG): container finished" podID="884f40e4-492b-4f73-94a7-8be81bde150e" 
containerID="18cade01ff342ab3b70b3ed35d174da6101ffd51f6ac4470478bce89a45f0e5c" exitCode=0 Feb 18 14:34:08 crc kubenswrapper[4739]: I0218 14:34:08.964484 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" event={"ID":"884f40e4-492b-4f73-94a7-8be81bde150e","Type":"ContainerDied","Data":"18cade01ff342ab3b70b3ed35d174da6101ffd51f6ac4470478bce89a45f0e5c"} Feb 18 14:34:09 crc kubenswrapper[4739]: I0218 14:34:09.789274 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:34:09 crc kubenswrapper[4739]: I0218 14:34:09.789351 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.524223 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.586358 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory\") pod \"884f40e4-492b-4f73-94a7-8be81bde150e\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.586469 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h48dj\" (UniqueName: \"kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj\") pod \"884f40e4-492b-4f73-94a7-8be81bde150e\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.586708 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam\") pod \"884f40e4-492b-4f73-94a7-8be81bde150e\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.596473 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj" (OuterVolumeSpecName: "kube-api-access-h48dj") pod "884f40e4-492b-4f73-94a7-8be81bde150e" (UID: "884f40e4-492b-4f73-94a7-8be81bde150e"). InnerVolumeSpecName "kube-api-access-h48dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.621290 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "884f40e4-492b-4f73-94a7-8be81bde150e" (UID: "884f40e4-492b-4f73-94a7-8be81bde150e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.622894 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory" (OuterVolumeSpecName: "inventory") pod "884f40e4-492b-4f73-94a7-8be81bde150e" (UID: "884f40e4-492b-4f73-94a7-8be81bde150e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.689279 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.689318 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.689331 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h48dj\" (UniqueName: \"kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.845387 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hvzqm" podUID="c2f46b1c-aab8-49aa-936d-40da9b28333b" containerName="registry-server" probeResult="failure" output=< Feb 18 14:34:10 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:34:10 crc kubenswrapper[4739]: > Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.008838 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" event={"ID":"884f40e4-492b-4f73-94a7-8be81bde150e","Type":"ContainerDied","Data":"cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77"} Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.008881 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.008898 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.106580 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4"] Feb 18 14:34:11 crc kubenswrapper[4739]: E0218 14:34:11.107189 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884f40e4-492b-4f73-94a7-8be81bde150e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.107210 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="884f40e4-492b-4f73-94a7-8be81bde150e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.107470 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="884f40e4-492b-4f73-94a7-8be81bde150e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.108344 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.112372 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.112377 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.112934 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.115994 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.119588 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4"] Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.198666 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.199107 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.199134 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdb6\" (UniqueName: \"kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.301900 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.301950 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zdb6\" (UniqueName: \"kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.302268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.317630 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.319541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.321655 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zdb6\" (UniqueName: \"kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.428056 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.979067 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4"] Feb 18 14:34:12 crc kubenswrapper[4739]: I0218 14:34:12.020594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" event={"ID":"af925314-bcd8-4373-b57e-612251a9687a","Type":"ContainerStarted","Data":"a92857f694cb927f3dc4da0302205ce4b34bcfd3c096ddbd31f0a6194971d241"} Feb 18 14:34:13 crc kubenswrapper[4739]: I0218 14:34:13.034651 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" event={"ID":"af925314-bcd8-4373-b57e-612251a9687a","Type":"ContainerStarted","Data":"a0a591d66554e3b571681de69c76547a7c3bea060d3f5e4c4e82aa59c580c103"} Feb 18 14:34:13 crc kubenswrapper[4739]: I0218 14:34:13.068993 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" podStartSLOduration=1.643496917 podStartE2EDuration="2.06897005s" podCreationTimestamp="2026-02-18 14:34:11 +0000 UTC" firstStartedPulling="2026-02-18 14:34:11.986689737 +0000 UTC m=+2084.482410659" lastFinishedPulling="2026-02-18 14:34:12.41216287 +0000 UTC m=+2084.907883792" observedRunningTime="2026-02-18 14:34:13.054172853 +0000 UTC m=+2085.549893875" watchObservedRunningTime="2026-02-18 14:34:13.06897005 +0000 UTC m=+2085.564690972" Feb 18 14:34:19 crc kubenswrapper[4739]: I0218 14:34:19.843563 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:34:19 crc kubenswrapper[4739]: I0218 14:34:19.903246 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:34:20 
crc kubenswrapper[4739]: I0218 14:34:20.507156 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hvzqm"] Feb 18 14:34:20 crc kubenswrapper[4739]: I0218 14:34:20.667485 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:34:20 crc kubenswrapper[4739]: I0218 14:34:20.667783 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n5478" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="registry-server" containerID="cri-o://65422be5444c8a4ea68ae396ec7f1c722474a478587aebd1878eee8ec7e12e64" gracePeriod=2 Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.141821 4739 generic.go:334] "Generic (PLEG): container finished" podID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerID="65422be5444c8a4ea68ae396ec7f1c722474a478587aebd1878eee8ec7e12e64" exitCode=0 Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.142087 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerDied","Data":"65422be5444c8a4ea68ae396ec7f1c722474a478587aebd1878eee8ec7e12e64"} Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.285063 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.461197 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities\") pod \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.461254 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content\") pod \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.461296 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzvjj\" (UniqueName: \"kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj\") pod \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.465107 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities" (OuterVolumeSpecName: "utilities") pod "6eb612bd-4974-4e9b-91d7-0240ce057aa5" (UID: "6eb612bd-4974-4e9b-91d7-0240ce057aa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.492903 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj" (OuterVolumeSpecName: "kube-api-access-zzvjj") pod "6eb612bd-4974-4e9b-91d7-0240ce057aa5" (UID: "6eb612bd-4974-4e9b-91d7-0240ce057aa5"). InnerVolumeSpecName "kube-api-access-zzvjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.565220 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.565493 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzvjj\" (UniqueName: \"kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.661311 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eb612bd-4974-4e9b-91d7-0240ce057aa5" (UID: "6eb612bd-4974-4e9b-91d7-0240ce057aa5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.667505 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.156972 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerDied","Data":"81b46654edd19d1432b58f9bd2576a94f39cc05f5d205ae85216f27b952d6aca"} Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.157039 4739 scope.go:117] "RemoveContainer" containerID="65422be5444c8a4ea68ae396ec7f1c722474a478587aebd1878eee8ec7e12e64" Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.157096 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.185684 4739 scope.go:117] "RemoveContainer" containerID="eb5f5e626edf6dc5aeeea1562bacf9b30a38b08f9a8a02a3adf3e93c88281a22" Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.208860 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.222969 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.226625 4739 scope.go:117] "RemoveContainer" containerID="cd68ab8027f647103dec3361912c6740c7fe91057ba0556d4d221b3bd0864eff" Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.422415 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" path="/var/lib/kubelet/pods/6eb612bd-4974-4e9b-91d7-0240ce057aa5/volumes" Feb 18 14:34:24 crc kubenswrapper[4739]: I0218 14:34:24.049211 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7d9ft"] Feb 18 14:34:24 crc kubenswrapper[4739]: I0218 14:34:24.068219 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7d9ft"] Feb 18 14:34:24 crc kubenswrapper[4739]: I0218 14:34:24.425076 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" path="/var/lib/kubelet/pods/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c/volumes" Feb 18 14:34:29 crc kubenswrapper[4739]: I0218 14:34:29.373146 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:34:29 crc 
kubenswrapper[4739]: I0218 14:34:29.373862 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:34:36 crc kubenswrapper[4739]: I0218 14:34:36.413632 4739 scope.go:117] "RemoveContainer" containerID="633345116a43d3ca8fa44023cd81269b98b8fe89948eab70d0c8a2b4002309e9" Feb 18 14:34:36 crc kubenswrapper[4739]: I0218 14:34:36.470038 4739 scope.go:117] "RemoveContainer" containerID="c6cce8603450086875d16ae66c0fe0efdc54a90290fdaaf6cec216bd19489355" Feb 18 14:34:36 crc kubenswrapper[4739]: I0218 14:34:36.527556 4739 scope.go:117] "RemoveContainer" containerID="f654a93fc558fd96d5cdb40c4eb8145a76ceb6daf5c1d8dd83b579ef3e4f1ae6" Feb 18 14:34:36 crc kubenswrapper[4739]: I0218 14:34:36.585064 4739 scope.go:117] "RemoveContainer" containerID="67951a3352fb939ea45b17ca75ec53a682c20dd4d63961be0be0da15f32b4807" Feb 18 14:34:44 crc kubenswrapper[4739]: I0218 14:34:44.051940 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-mvdqm"] Feb 18 14:34:44 crc kubenswrapper[4739]: I0218 14:34:44.063053 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-mvdqm"] Feb 18 14:34:44 crc kubenswrapper[4739]: I0218 14:34:44.425478 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147cff80-30af-4fc7-961f-5f6e17af51bb" path="/var/lib/kubelet/pods/147cff80-30af-4fc7-961f-5f6e17af51bb/volumes" Feb 18 14:34:46 crc kubenswrapper[4739]: I0218 14:34:46.422151 4739 generic.go:334] "Generic (PLEG): container finished" podID="af925314-bcd8-4373-b57e-612251a9687a" containerID="a0a591d66554e3b571681de69c76547a7c3bea060d3f5e4c4e82aa59c580c103" exitCode=0 Feb 18 14:34:46 crc kubenswrapper[4739]: I0218 14:34:46.423293 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" event={"ID":"af925314-bcd8-4373-b57e-612251a9687a","Type":"ContainerDied","Data":"a0a591d66554e3b571681de69c76547a7c3bea060d3f5e4c4e82aa59c580c103"} Feb 18 14:34:47 crc kubenswrapper[4739]: I0218 14:34:47.941360 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.103266 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zdb6\" (UniqueName: \"kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6\") pod \"af925314-bcd8-4373-b57e-612251a9687a\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.103437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory\") pod \"af925314-bcd8-4373-b57e-612251a9687a\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.103624 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam\") pod \"af925314-bcd8-4373-b57e-612251a9687a\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.120936 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6" (OuterVolumeSpecName: "kube-api-access-6zdb6") pod "af925314-bcd8-4373-b57e-612251a9687a" (UID: "af925314-bcd8-4373-b57e-612251a9687a"). InnerVolumeSpecName "kube-api-access-6zdb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.138251 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af925314-bcd8-4373-b57e-612251a9687a" (UID: "af925314-bcd8-4373-b57e-612251a9687a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.150676 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory" (OuterVolumeSpecName: "inventory") pod "af925314-bcd8-4373-b57e-612251a9687a" (UID: "af925314-bcd8-4373-b57e-612251a9687a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.206352 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.206402 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zdb6\" (UniqueName: \"kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.206417 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.445271 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" 
event={"ID":"af925314-bcd8-4373-b57e-612251a9687a","Type":"ContainerDied","Data":"a92857f694cb927f3dc4da0302205ce4b34bcfd3c096ddbd31f0a6194971d241"} Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.445529 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92857f694cb927f3dc4da0302205ce4b34bcfd3c096ddbd31f0a6194971d241" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.445368 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.536780 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24"] Feb 18 14:34:48 crc kubenswrapper[4739]: E0218 14:34:48.537812 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af925314-bcd8-4373-b57e-612251a9687a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.537887 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="af925314-bcd8-4373-b57e-612251a9687a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:48 crc kubenswrapper[4739]: E0218 14:34:48.537953 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="extract-content" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.538037 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="extract-content" Feb 18 14:34:48 crc kubenswrapper[4739]: E0218 14:34:48.538110 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="registry-server" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.538176 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" 
containerName="registry-server" Feb 18 14:34:48 crc kubenswrapper[4739]: E0218 14:34:48.538237 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="extract-utilities" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.538290 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="extract-utilities" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.538603 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="registry-server" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.538676 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="af925314-bcd8-4373-b57e-612251a9687a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.539594 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.545025 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.545788 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.548283 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.560308 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.572167 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24"] Feb 18 
14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.721263 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2js\" (UniqueName: \"kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.722007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.722402 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.825089 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.825223 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hx2js\" (UniqueName: \"kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.825492 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.833405 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.833588 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.851263 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2js\" (UniqueName: \"kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.860137 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:49 crc kubenswrapper[4739]: I0218 14:34:49.460266 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24"] Feb 18 14:34:50 crc kubenswrapper[4739]: I0218 14:34:50.467564 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" event={"ID":"8795d84c-3a90-438c-8f2b-066cd875316d","Type":"ContainerStarted","Data":"d288b73919e9ab5a400a769557195ccb45adf86d031473821fca19cff0ad5b9d"} Feb 18 14:34:50 crc kubenswrapper[4739]: I0218 14:34:50.467864 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" event={"ID":"8795d84c-3a90-438c-8f2b-066cd875316d","Type":"ContainerStarted","Data":"b3a34cc07895a1a091e25014f6442b4b884353bcc3deb836e1faf2cf43ee2571"} Feb 18 14:34:50 crc kubenswrapper[4739]: I0218 14:34:50.491515 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" podStartSLOduration=2.073546206 podStartE2EDuration="2.491489988s" podCreationTimestamp="2026-02-18 14:34:48 +0000 UTC" firstStartedPulling="2026-02-18 14:34:49.464092244 +0000 UTC m=+2121.959813166" lastFinishedPulling="2026-02-18 14:34:49.882036026 +0000 UTC m=+2122.377756948" observedRunningTime="2026-02-18 14:34:50.484995492 +0000 UTC m=+2122.980716414" watchObservedRunningTime="2026-02-18 14:34:50.491489988 +0000 UTC m=+2122.987210920" Feb 18 14:34:59 crc kubenswrapper[4739]: I0218 14:34:59.372997 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:34:59 crc kubenswrapper[4739]: I0218 14:34:59.373697 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.372860 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.373377 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.373503 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.374408 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:35:29 crc 
kubenswrapper[4739]: I0218 14:35:29.374482 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" gracePeriod=600 Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.897155 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" exitCode=0 Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.897229 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"} Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.897286 4739 scope.go:117] "RemoveContainer" containerID="eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334" Feb 18 14:35:30 crc kubenswrapper[4739]: E0218 14:35:30.028529 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:35:30 crc kubenswrapper[4739]: I0218 14:35:30.910035 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:35:30 crc kubenswrapper[4739]: E0218 14:35:30.910668 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:35:35 crc kubenswrapper[4739]: E0218 14:35:35.830759 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8795d84c_3a90_438c_8f2b_066cd875316d.slice/crio-d288b73919e9ab5a400a769557195ccb45adf86d031473821fca19cff0ad5b9d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8795d84c_3a90_438c_8f2b_066cd875316d.slice/crio-conmon-d288b73919e9ab5a400a769557195ccb45adf86d031473821fca19cff0ad5b9d.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:35:35 crc kubenswrapper[4739]: I0218 14:35:35.960656 4739 generic.go:334] "Generic (PLEG): container finished" podID="8795d84c-3a90-438c-8f2b-066cd875316d" containerID="d288b73919e9ab5a400a769557195ccb45adf86d031473821fca19cff0ad5b9d" exitCode=0 Feb 18 14:35:35 crc kubenswrapper[4739]: I0218 14:35:35.960702 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" event={"ID":"8795d84c-3a90-438c-8f2b-066cd875316d","Type":"ContainerDied","Data":"d288b73919e9ab5a400a769557195ccb45adf86d031473821fca19cff0ad5b9d"} Feb 18 14:35:36 crc kubenswrapper[4739]: I0218 14:35:36.739534 4739 scope.go:117] "RemoveContainer" containerID="719754d11a438c2796a0ba11ae2f879324b6243f92382b8f8f42f425c9043930" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.545954 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.727811 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx2js\" (UniqueName: \"kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js\") pod \"8795d84c-3a90-438c-8f2b-066cd875316d\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.728871 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory\") pod \"8795d84c-3a90-438c-8f2b-066cd875316d\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.728932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam\") pod \"8795d84c-3a90-438c-8f2b-066cd875316d\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.736199 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js" (OuterVolumeSpecName: "kube-api-access-hx2js") pod "8795d84c-3a90-438c-8f2b-066cd875316d" (UID: "8795d84c-3a90-438c-8f2b-066cd875316d"). InnerVolumeSpecName "kube-api-access-hx2js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.771093 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory" (OuterVolumeSpecName: "inventory") pod "8795d84c-3a90-438c-8f2b-066cd875316d" (UID: "8795d84c-3a90-438c-8f2b-066cd875316d"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.771690 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8795d84c-3a90-438c-8f2b-066cd875316d" (UID: "8795d84c-3a90-438c-8f2b-066cd875316d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.831850 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.831885 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.831913 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx2js\" (UniqueName: \"kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.981073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" event={"ID":"8795d84c-3a90-438c-8f2b-066cd875316d","Type":"ContainerDied","Data":"b3a34cc07895a1a091e25014f6442b4b884353bcc3deb836e1faf2cf43ee2571"} Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.981404 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a34cc07895a1a091e25014f6442b4b884353bcc3deb836e1faf2cf43ee2571" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 
14:35:37.981131 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.065382 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f68sz"] Feb 18 14:35:38 crc kubenswrapper[4739]: E0218 14:35:38.066035 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8795d84c-3a90-438c-8f2b-066cd875316d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.066060 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8795d84c-3a90-438c-8f2b-066cd875316d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.066325 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8795d84c-3a90-438c-8f2b-066cd875316d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.067255 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.077114 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f68sz"] Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.105961 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.106017 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.106191 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.106368 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.241034 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.241360 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgfv\" (UniqueName: \"kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.241633 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.344243 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.344421 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.344558 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbgfv\" (UniqueName: \"kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.349016 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: 
I0218 14:35:38.349143 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.365844 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbgfv\" (UniqueName: \"kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.443250 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:39 crc kubenswrapper[4739]: I0218 14:35:39.019932 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:35:39 crc kubenswrapper[4739]: I0218 14:35:39.024535 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f68sz"] Feb 18 14:35:40 crc kubenswrapper[4739]: I0218 14:35:40.003661 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" event={"ID":"63f139bc-490d-48b7-98c1-e29c8f583d90","Type":"ContainerStarted","Data":"187f668701c65483c64a33bd8b966160759d19342dfbf99b15688b0475818667"} Feb 18 14:35:40 crc kubenswrapper[4739]: I0218 14:35:40.003904 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" event={"ID":"63f139bc-490d-48b7-98c1-e29c8f583d90","Type":"ContainerStarted","Data":"50537900a47f8a7257258b9346ddc74b1ba2cdd5c32ed6b53de62959232116d6"} Feb 18 14:35:40 crc kubenswrapper[4739]: I0218 
14:35:40.026234 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" podStartSLOduration=1.426029508 podStartE2EDuration="2.026208146s" podCreationTimestamp="2026-02-18 14:35:38 +0000 UTC" firstStartedPulling="2026-02-18 14:35:39.019163414 +0000 UTC m=+2171.514884336" lastFinishedPulling="2026-02-18 14:35:39.619342052 +0000 UTC m=+2172.115062974" observedRunningTime="2026-02-18 14:35:40.016990201 +0000 UTC m=+2172.512711133" watchObservedRunningTime="2026-02-18 14:35:40.026208146 +0000 UTC m=+2172.521929088" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.277045 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.287486 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.302136 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.320278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdm7\" (UniqueName: \"kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.320465 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 
14:35:44.320570 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.410628 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:35:44 crc kubenswrapper[4739]: E0218 14:35:44.411073 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.423306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdm7\" (UniqueName: \"kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.423409 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.423472 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.423972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.423992 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.446427 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdm7\" (UniqueName: \"kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.621838 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:45 crc kubenswrapper[4739]: I0218 14:35:45.158930 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:46 crc kubenswrapper[4739]: I0218 14:35:46.084503 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerID="66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8" exitCode=0 Feb 18 14:35:46 crc kubenswrapper[4739]: I0218 14:35:46.084832 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerDied","Data":"66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8"} Feb 18 14:35:46 crc kubenswrapper[4739]: I0218 14:35:46.084860 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerStarted","Data":"e98eb27ec0270c6671d9c8a8131aa295c8aeb989346be9138e5f12fa0696debd"} Feb 18 14:35:47 crc kubenswrapper[4739]: I0218 14:35:47.103808 4739 generic.go:334] "Generic (PLEG): container finished" podID="63f139bc-490d-48b7-98c1-e29c8f583d90" containerID="187f668701c65483c64a33bd8b966160759d19342dfbf99b15688b0475818667" exitCode=0 Feb 18 14:35:47 crc kubenswrapper[4739]: I0218 14:35:47.104199 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" event={"ID":"63f139bc-490d-48b7-98c1-e29c8f583d90","Type":"ContainerDied","Data":"187f668701c65483c64a33bd8b966160759d19342dfbf99b15688b0475818667"} Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.137303 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" 
event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerStarted","Data":"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4"} Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.674122 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.751889 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0\") pod \"63f139bc-490d-48b7-98c1-e29c8f583d90\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.752059 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam\") pod \"63f139bc-490d-48b7-98c1-e29c8f583d90\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.752133 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbgfv\" (UniqueName: \"kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv\") pod \"63f139bc-490d-48b7-98c1-e29c8f583d90\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.776731 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv" (OuterVolumeSpecName: "kube-api-access-wbgfv") pod "63f139bc-490d-48b7-98c1-e29c8f583d90" (UID: "63f139bc-490d-48b7-98c1-e29c8f583d90"). InnerVolumeSpecName "kube-api-access-wbgfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.792990 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "63f139bc-490d-48b7-98c1-e29c8f583d90" (UID: "63f139bc-490d-48b7-98c1-e29c8f583d90"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.793647 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "63f139bc-490d-48b7-98c1-e29c8f583d90" (UID: "63f139bc-490d-48b7-98c1-e29c8f583d90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.861809 4739 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.861845 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.861855 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbgfv\" (UniqueName: \"kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.150501 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" 
event={"ID":"63f139bc-490d-48b7-98c1-e29c8f583d90","Type":"ContainerDied","Data":"50537900a47f8a7257258b9346ddc74b1ba2cdd5c32ed6b53de62959232116d6"} Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.150909 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50537900a47f8a7257258b9346ddc74b1ba2cdd5c32ed6b53de62959232116d6" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.150752 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.153267 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerID="098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4" exitCode=0 Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.153313 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerDied","Data":"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4"} Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.230000 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96"] Feb 18 14:35:49 crc kubenswrapper[4739]: E0218 14:35:49.231231 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f139bc-490d-48b7-98c1-e29c8f583d90" containerName="ssh-known-hosts-edpm-deployment" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.231263 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f139bc-490d-48b7-98c1-e29c8f583d90" containerName="ssh-known-hosts-edpm-deployment" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.231580 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f139bc-490d-48b7-98c1-e29c8f583d90" containerName="ssh-known-hosts-edpm-deployment" Feb 18 14:35:49 crc 
kubenswrapper[4739]: I0218 14:35:49.232575 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.238934 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.239180 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.239379 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.240110 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.276784 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96"] Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.374734 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.375138 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 
14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.375367 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.478007 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.478127 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.478215 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.483103 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.495854 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.496111 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.571980 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.064022 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dk57d"] Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.067936 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.081369 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk57d"] Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.103997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxs7p\" (UniqueName: \"kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.104100 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.104343 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.172998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerStarted","Data":"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6"} Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.202536 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-ct24c" podStartSLOduration=2.71622132 podStartE2EDuration="6.202511499s" podCreationTimestamp="2026-02-18 14:35:44 +0000 UTC" firstStartedPulling="2026-02-18 14:35:46.087141987 +0000 UTC m=+2178.582862909" lastFinishedPulling="2026-02-18 14:35:49.573432166 +0000 UTC m=+2182.069153088" observedRunningTime="2026-02-18 14:35:50.196908306 +0000 UTC m=+2182.692629238" watchObservedRunningTime="2026-02-18 14:35:50.202511499 +0000 UTC m=+2182.698232421" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.206214 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.206303 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxs7p\" (UniqueName: \"kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.206355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.206855 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content\") pod \"certified-operators-dk57d\" (UID: 
\"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.206880 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.229671 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxs7p\" (UniqueName: \"kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.279415 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96"] Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.399914 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: W0218 14:35:50.928341 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3294ebfc_1c27_44e3_a94e_ef98dfd9f0f1.slice/crio-cdaa3f953e05885fe975cbdb944614d11775a19b3997b116b17e5cc3b88476ef WatchSource:0}: Error finding container cdaa3f953e05885fe975cbdb944614d11775a19b3997b116b17e5cc3b88476ef: Status 404 returned error can't find the container with id cdaa3f953e05885fe975cbdb944614d11775a19b3997b116b17e5cc3b88476ef Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.942083 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk57d"] Feb 18 14:35:51 crc kubenswrapper[4739]: I0218 14:35:51.196475 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" event={"ID":"18f01021-e95a-43e8-a660-1a2c9cb9d8c5","Type":"ContainerStarted","Data":"2abea5bb56e874060956efd6c58905978721bda04f9962db60beb0ca3290a362"} Feb 18 14:35:51 crc kubenswrapper[4739]: I0218 14:35:51.199667 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerStarted","Data":"cdaa3f953e05885fe975cbdb944614d11775a19b3997b116b17e5cc3b88476ef"} Feb 18 14:35:52 crc kubenswrapper[4739]: I0218 14:35:52.213967 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" event={"ID":"18f01021-e95a-43e8-a660-1a2c9cb9d8c5","Type":"ContainerStarted","Data":"0a36caa5a304b255bbea0df3251e633b5ea577e67c9aeae95277ec0d7d37b606"} Feb 18 14:35:52 crc kubenswrapper[4739]: I0218 14:35:52.217790 4739 generic.go:334] "Generic (PLEG): container finished" podID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" 
containerID="52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b" exitCode=0 Feb 18 14:35:52 crc kubenswrapper[4739]: I0218 14:35:52.217836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerDied","Data":"52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b"} Feb 18 14:35:52 crc kubenswrapper[4739]: I0218 14:35:52.239748 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" podStartSLOduration=2.803240743 podStartE2EDuration="3.239726569s" podCreationTimestamp="2026-02-18 14:35:49 +0000 UTC" firstStartedPulling="2026-02-18 14:35:50.289944952 +0000 UTC m=+2182.785665874" lastFinishedPulling="2026-02-18 14:35:50.726430788 +0000 UTC m=+2183.222151700" observedRunningTime="2026-02-18 14:35:52.230602997 +0000 UTC m=+2184.726323929" watchObservedRunningTime="2026-02-18 14:35:52.239726569 +0000 UTC m=+2184.735447491" Feb 18 14:35:54 crc kubenswrapper[4739]: I0218 14:35:54.237690 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerStarted","Data":"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e"} Feb 18 14:35:54 crc kubenswrapper[4739]: I0218 14:35:54.622490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:54 crc kubenswrapper[4739]: I0218 14:35:54.622554 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:54 crc kubenswrapper[4739]: I0218 14:35:54.673550 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:55 crc kubenswrapper[4739]: I0218 
14:35:55.302695 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:56 crc kubenswrapper[4739]: I0218 14:35:56.259755 4739 generic.go:334] "Generic (PLEG): container finished" podID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerID="a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e" exitCode=0 Feb 18 14:35:56 crc kubenswrapper[4739]: I0218 14:35:56.259847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerDied","Data":"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e"} Feb 18 14:35:56 crc kubenswrapper[4739]: I0218 14:35:56.410646 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:35:56 crc kubenswrapper[4739]: E0218 14:35:56.411022 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.054217 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.272104 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerStarted","Data":"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949"} Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.272285 4739 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-marketplace-ct24c" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="registry-server" containerID="cri-o://e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6" gracePeriod=2 Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.305713 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dk57d" podStartSLOduration=2.644625662 podStartE2EDuration="7.305692877s" podCreationTimestamp="2026-02-18 14:35:50 +0000 UTC" firstStartedPulling="2026-02-18 14:35:52.219889414 +0000 UTC m=+2184.715610336" lastFinishedPulling="2026-02-18 14:35:56.880956629 +0000 UTC m=+2189.376677551" observedRunningTime="2026-02-18 14:35:57.297751205 +0000 UTC m=+2189.793472137" watchObservedRunningTime="2026-02-18 14:35:57.305692877 +0000 UTC m=+2189.801413799" Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.843860 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.911138 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content\") pod \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.911254 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdm7\" (UniqueName: \"kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7\") pod \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.911517 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities\") pod \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.912185 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities" (OuterVolumeSpecName: "utilities") pod "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" (UID: "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.930949 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7" (OuterVolumeSpecName: "kube-api-access-htdm7") pod "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" (UID: "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0"). InnerVolumeSpecName "kube-api-access-htdm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.957204 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" (UID: "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.014155 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htdm7\" (UniqueName: \"kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.014189 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.014199 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.284478 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerID="e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6" exitCode=0 Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.284521 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerDied","Data":"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6"} Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.284554 4739 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerDied","Data":"e98eb27ec0270c6671d9c8a8131aa295c8aeb989346be9138e5f12fa0696debd"} Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.284571 4739 scope.go:117] "RemoveContainer" containerID="e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.284578 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.320125 4739 scope.go:117] "RemoveContainer" containerID="098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.323572 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.336195 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.389139 4739 scope.go:117] "RemoveContainer" containerID="66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.423208 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" path="/var/lib/kubelet/pods/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0/volumes" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.462823 4739 scope.go:117] "RemoveContainer" containerID="e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6" Feb 18 14:35:58 crc kubenswrapper[4739]: E0218 14:35:58.463806 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6\": container with ID 
starting with e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6 not found: ID does not exist" containerID="e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.463860 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6"} err="failed to get container status \"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6\": rpc error: code = NotFound desc = could not find container \"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6\": container with ID starting with e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6 not found: ID does not exist" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.463894 4739 scope.go:117] "RemoveContainer" containerID="098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4" Feb 18 14:35:58 crc kubenswrapper[4739]: E0218 14:35:58.464348 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4\": container with ID starting with 098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4 not found: ID does not exist" containerID="098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.464382 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4"} err="failed to get container status \"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4\": rpc error: code = NotFound desc = could not find container \"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4\": container with ID starting with 098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4 not found: 
ID does not exist" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.464563 4739 scope.go:117] "RemoveContainer" containerID="66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8" Feb 18 14:35:58 crc kubenswrapper[4739]: E0218 14:35:58.464944 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8\": container with ID starting with 66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8 not found: ID does not exist" containerID="66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.464974 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8"} err="failed to get container status \"66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8\": rpc error: code = NotFound desc = could not find container \"66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8\": container with ID starting with 66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8 not found: ID does not exist" Feb 18 14:35:59 crc kubenswrapper[4739]: I0218 14:35:59.298462 4739 generic.go:334] "Generic (PLEG): container finished" podID="18f01021-e95a-43e8-a660-1a2c9cb9d8c5" containerID="0a36caa5a304b255bbea0df3251e633b5ea577e67c9aeae95277ec0d7d37b606" exitCode=0 Feb 18 14:35:59 crc kubenswrapper[4739]: I0218 14:35:59.298501 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" event={"ID":"18f01021-e95a-43e8-a660-1a2c9cb9d8c5","Type":"ContainerDied","Data":"0a36caa5a304b255bbea0df3251e633b5ea577e67c9aeae95277ec0d7d37b606"} Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.400041 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.400368 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.479494 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.804033 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.884561 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam\") pod \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.884750 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5\") pod \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.884850 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory\") pod \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.889897 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5" (OuterVolumeSpecName: "kube-api-access-zd9s5") 
pod "18f01021-e95a-43e8-a660-1a2c9cb9d8c5" (UID: "18f01021-e95a-43e8-a660-1a2c9cb9d8c5"). InnerVolumeSpecName "kube-api-access-zd9s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.913022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "18f01021-e95a-43e8-a660-1a2c9cb9d8c5" (UID: "18f01021-e95a-43e8-a660-1a2c9cb9d8c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.916992 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory" (OuterVolumeSpecName: "inventory") pod "18f01021-e95a-43e8-a660-1a2c9cb9d8c5" (UID: "18f01021-e95a-43e8-a660-1a2c9cb9d8c5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.988892 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.988934 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.988947 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.338310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" event={"ID":"18f01021-e95a-43e8-a660-1a2c9cb9d8c5","Type":"ContainerDied","Data":"2abea5bb56e874060956efd6c58905978721bda04f9962db60beb0ca3290a362"} Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.338643 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2abea5bb56e874060956efd6c58905978721bda04f9962db60beb0ca3290a362" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.338489 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396180 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr"] Feb 18 14:36:01 crc kubenswrapper[4739]: E0218 14:36:01.396684 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f01021-e95a-43e8-a660-1a2c9cb9d8c5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396702 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f01021-e95a-43e8-a660-1a2c9cb9d8c5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:01 crc kubenswrapper[4739]: E0218 14:36:01.396725 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="registry-server" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396732 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="registry-server" Feb 18 14:36:01 crc kubenswrapper[4739]: E0218 14:36:01.396750 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="extract-utilities" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396757 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="extract-utilities" Feb 18 14:36:01 crc kubenswrapper[4739]: E0218 14:36:01.396776 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="extract-content" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396781 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="extract-content" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396989 4739 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="registry-server" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.397013 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f01021-e95a-43e8-a660-1a2c9cb9d8c5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.397767 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.401662 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.401845 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.401874 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.411081 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr"] Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.439772 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.499968 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: 
I0218 14:36:01.500061 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.500516 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq7fp\" (UniqueName: \"kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.603973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.604059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.604150 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq7fp\" (UniqueName: 
\"kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.609207 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.612524 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.619356 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq7fp\" (UniqueName: \"kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.757840 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:02 crc kubenswrapper[4739]: I0218 14:36:02.308232 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr"] Feb 18 14:36:02 crc kubenswrapper[4739]: I0218 14:36:02.348523 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" event={"ID":"c7a96416-0a9e-44f5-9200-755a99d4c38e","Type":"ContainerStarted","Data":"0a66691ca87594d26416873682bfd3c94b8591005eb049dcda8c1fe1ff884c24"} Feb 18 14:36:03 crc kubenswrapper[4739]: I0218 14:36:03.369436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" event={"ID":"c7a96416-0a9e-44f5-9200-755a99d4c38e","Type":"ContainerStarted","Data":"09fa6ef9c8bdb5d73b629df7fbb74d95a842311149a8134f3bf5046e44ed6aed"} Feb 18 14:36:03 crc kubenswrapper[4739]: I0218 14:36:03.399193 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" podStartSLOduration=1.917995253 podStartE2EDuration="2.399175446s" podCreationTimestamp="2026-02-18 14:36:01 +0000 UTC" firstStartedPulling="2026-02-18 14:36:02.314548292 +0000 UTC m=+2194.810269214" lastFinishedPulling="2026-02-18 14:36:02.795728485 +0000 UTC m=+2195.291449407" observedRunningTime="2026-02-18 14:36:03.389724396 +0000 UTC m=+2195.885445348" watchObservedRunningTime="2026-02-18 14:36:03.399175446 +0000 UTC m=+2195.894896368" Feb 18 14:36:07 crc kubenswrapper[4739]: I0218 14:36:07.053880 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-zq8vc"] Feb 18 14:36:07 crc kubenswrapper[4739]: I0218 14:36:07.067465 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-zq8vc"] Feb 18 14:36:08 crc kubenswrapper[4739]: I0218 14:36:08.424967 4739 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" path="/var/lib/kubelet/pods/6e0a952f-ef12-46c6-8ca8-10f016b441be/volumes" Feb 18 14:36:09 crc kubenswrapper[4739]: I0218 14:36:09.411792 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:36:09 crc kubenswrapper[4739]: E0218 14:36:09.412961 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:36:10 crc kubenswrapper[4739]: I0218 14:36:10.461241 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:10 crc kubenswrapper[4739]: I0218 14:36:10.520696 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk57d"] Feb 18 14:36:11 crc kubenswrapper[4739]: I0218 14:36:11.506621 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dk57d" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="registry-server" containerID="cri-o://db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949" gracePeriod=2 Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.039265 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.094430 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxs7p\" (UniqueName: \"kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p\") pod \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.094576 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities\") pod \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.094611 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content\") pod \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.096140 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities" (OuterVolumeSpecName: "utilities") pod "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" (UID: "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.096952 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.101677 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p" (OuterVolumeSpecName: "kube-api-access-xxs7p") pod "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" (UID: "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1"). InnerVolumeSpecName "kube-api-access-xxs7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.152479 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" (UID: "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.199510 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxs7p\" (UniqueName: \"kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p\") on node \"crc\" DevicePath \"\""
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.199549 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.519411 4739 generic.go:334] "Generic (PLEG): container finished" podID="c7a96416-0a9e-44f5-9200-755a99d4c38e" containerID="09fa6ef9c8bdb5d73b629df7fbb74d95a842311149a8134f3bf5046e44ed6aed" exitCode=0
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.519496 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" event={"ID":"c7a96416-0a9e-44f5-9200-755a99d4c38e","Type":"ContainerDied","Data":"09fa6ef9c8bdb5d73b629df7fbb74d95a842311149a8134f3bf5046e44ed6aed"}
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.522951 4739 generic.go:334] "Generic (PLEG): container finished" podID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerID="db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949" exitCode=0
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.523008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerDied","Data":"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949"}
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.523033 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerDied","Data":"cdaa3f953e05885fe975cbdb944614d11775a19b3997b116b17e5cc3b88476ef"}
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.523063 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk57d"
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.523072 4739 scope.go:117] "RemoveContainer" containerID="db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949"
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.559665 4739 scope.go:117] "RemoveContainer" containerID="a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e"
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.574016 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk57d"]
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.581587 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dk57d"]
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.589378 4739 scope.go:117] "RemoveContainer" containerID="52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b"
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.652239 4739 scope.go:117] "RemoveContainer" containerID="db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949"
Feb 18 14:36:12 crc kubenswrapper[4739]: E0218 14:36:12.652725 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949\": container with ID starting with db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949 not found: ID does not exist" containerID="db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949"
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.652756 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949"} err="failed to get container status \"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949\": rpc error: code = NotFound desc = could not find container \"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949\": container with ID starting with db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949 not found: ID does not exist"
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.652781 4739 scope.go:117] "RemoveContainer" containerID="a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e"
Feb 18 14:36:12 crc kubenswrapper[4739]: E0218 14:36:12.653148 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e\": container with ID starting with a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e not found: ID does not exist" containerID="a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e"
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.653188 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e"} err="failed to get container status \"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e\": rpc error: code = NotFound desc = could not find container \"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e\": container with ID starting with a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e not found: ID does not exist"
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.653215 4739 scope.go:117] "RemoveContainer" containerID="52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b"
Feb 18 14:36:12 crc kubenswrapper[4739]: E0218 14:36:12.653768 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b\": container with ID starting with 52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b not found: ID does not exist" containerID="52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b"
Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.653815 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b"} err="failed to get container status \"52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b\": rpc error: code = NotFound desc = could not find container \"52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b\": container with ID starting with 52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b not found: ID does not exist"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.064813 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.151046 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory\") pod \"c7a96416-0a9e-44f5-9200-755a99d4c38e\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") "
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.151511 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq7fp\" (UniqueName: \"kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp\") pod \"c7a96416-0a9e-44f5-9200-755a99d4c38e\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") "
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.151682 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam\") pod \"c7a96416-0a9e-44f5-9200-755a99d4c38e\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") "
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.162847 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp" (OuterVolumeSpecName: "kube-api-access-tq7fp") pod "c7a96416-0a9e-44f5-9200-755a99d4c38e" (UID: "c7a96416-0a9e-44f5-9200-755a99d4c38e"). InnerVolumeSpecName "kube-api-access-tq7fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.189249 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory" (OuterVolumeSpecName: "inventory") pod "c7a96416-0a9e-44f5-9200-755a99d4c38e" (UID: "c7a96416-0a9e-44f5-9200-755a99d4c38e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.193599 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c7a96416-0a9e-44f5-9200-755a99d4c38e" (UID: "c7a96416-0a9e-44f5-9200-755a99d4c38e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.255028 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.255360 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq7fp\" (UniqueName: \"kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp\") on node \"crc\" DevicePath \"\""
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.255561 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.431246 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" path="/var/lib/kubelet/pods/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1/volumes"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.562518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" event={"ID":"c7a96416-0a9e-44f5-9200-755a99d4c38e","Type":"ContainerDied","Data":"0a66691ca87594d26416873682bfd3c94b8591005eb049dcda8c1fe1ff884c24"}
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.562828 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a66691ca87594d26416873682bfd3c94b8591005eb049dcda8c1fe1ff884c24"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.562584 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.651216 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"]
Feb 18 14:36:14 crc kubenswrapper[4739]: E0218 14:36:14.651868 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a96416-0a9e-44f5-9200-755a99d4c38e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.651894 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a96416-0a9e-44f5-9200-755a99d4c38e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:36:14 crc kubenswrapper[4739]: E0218 14:36:14.651916 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="extract-utilities"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.651925 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="extract-utilities"
Feb 18 14:36:14 crc kubenswrapper[4739]: E0218 14:36:14.651945 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="registry-server"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.651952 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="registry-server"
Feb 18 14:36:14 crc kubenswrapper[4739]: E0218 14:36:14.651967 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="extract-content"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.651975 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="extract-content"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.652258 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a96416-0a9e-44f5-9200-755a99d4c38e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.652282 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="registry-server"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.656376 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659055 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659270 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659283 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659389 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659537 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659635 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.660714 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.660724 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.661774 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.662514 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"]
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.783897 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.783978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnx4h\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784037 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784079 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784112 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784149 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784206 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784318 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784528 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784659 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784795 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784962 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.785046 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.785117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.785305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.785395 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888506 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888824 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnx4h\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888858 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888886 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888911 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888942 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888972 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.889040 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.889077 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.889113 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.889158 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.894885 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.895128 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.895136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.895275 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.895835 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.896906 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.897138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.897397 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.897702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.898368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.898373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.898747 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.900614 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.901594 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.901650 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.910412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnx4h\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.991481 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"
Feb 18 14:36:15 crc kubenswrapper[4739]: I0218 14:36:15.646081 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"]
Feb 18 14:36:16 crc kubenswrapper[4739]: I0218 14:36:16.590671 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" event={"ID":"fc5c5a16-015a-48fe-a2c1-1954543e14bd","Type":"ContainerStarted","Data":"df78306ba01b1d911236fd9e681dba2353f595691554d4b3fd42fed37cdd9542"}
Feb 18 14:36:16 crc kubenswrapper[4739]: I0218 14:36:16.591718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" event={"ID":"fc5c5a16-015a-48fe-a2c1-1954543e14bd","Type":"ContainerStarted","Data":"b730c2f9b6fecc1733f1c12778c0b205ba9f2320979358e1eb9d5c08b8b95993"}
Feb 18 14:36:16 crc kubenswrapper[4739]: I0218 14:36:16.617236 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" podStartSLOduration=2.143170957 podStartE2EDuration="2.617216678s" podCreationTimestamp="2026-02-18 14:36:14 +0000 UTC" firstStartedPulling="2026-02-18 14:36:15.661182094 +0000 UTC m=+2208.156903016" lastFinishedPulling="2026-02-18 14:36:16.135227815 +0000 UTC m=+2208.630948737" observedRunningTime="2026-02-18 14:36:16.612437147 +0000 UTC m=+2209.108158069"
watchObservedRunningTime="2026-02-18 14:36:16.617216678 +0000 UTC m=+2209.112937600" Feb 18 14:36:21 crc kubenswrapper[4739]: I0218 14:36:21.412199 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:36:21 crc kubenswrapper[4739]: E0218 14:36:21.412991 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:36:33 crc kubenswrapper[4739]: I0218 14:36:33.410464 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:36:33 crc kubenswrapper[4739]: E0218 14:36:33.411333 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:36:36 crc kubenswrapper[4739]: I0218 14:36:36.857080 4739 scope.go:117] "RemoveContainer" containerID="03775c57719ac4b92c1847bc19cfdeea48db66d3dda5aee4aca36cb4a626f862" Feb 18 14:36:48 crc kubenswrapper[4739]: I0218 14:36:48.418813 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:36:48 crc kubenswrapper[4739]: E0218 14:36:48.419692 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:36:50 crc kubenswrapper[4739]: I0218 14:36:50.054773 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-k8bxr"] Feb 18 14:36:50 crc kubenswrapper[4739]: I0218 14:36:50.068952 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-k8bxr"] Feb 18 14:36:50 crc kubenswrapper[4739]: I0218 14:36:50.423831 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e3b1f2-e16d-4800-90db-c4cc03f891c3" path="/var/lib/kubelet/pods/18e3b1f2-e16d-4800-90db-c4cc03f891c3/volumes" Feb 18 14:36:56 crc kubenswrapper[4739]: I0218 14:36:56.007589 4739 generic.go:334] "Generic (PLEG): container finished" podID="fc5c5a16-015a-48fe-a2c1-1954543e14bd" containerID="df78306ba01b1d911236fd9e681dba2353f595691554d4b3fd42fed37cdd9542" exitCode=0 Feb 18 14:36:56 crc kubenswrapper[4739]: I0218 14:36:56.007716 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" event={"ID":"fc5c5a16-015a-48fe-a2c1-1954543e14bd","Type":"ContainerDied","Data":"df78306ba01b1d911236fd9e681dba2353f595691554d4b3fd42fed37cdd9542"} Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.499742 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606259 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606316 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606341 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606384 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnx4h\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606404 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc 
kubenswrapper[4739]: I0218 14:36:57.606473 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606500 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606608 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606666 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606766 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" 
(UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606806 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606828 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606853 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606878 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606907 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc 
kubenswrapper[4739]: I0218 14:36:57.606947 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.613632 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.615396 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.615549 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.615911 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.616285 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.617129 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.617773 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h" (OuterVolumeSpecName: "kube-api-access-wnx4h") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "kube-api-access-wnx4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.618181 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.618556 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.618724 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.621669 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.622396 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.625958 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.628794 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.660098 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.668276 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory" (OuterVolumeSpecName: "inventory") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.710965 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711025 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711045 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711058 4739 reconciler_common.go:293] "Volume detached for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711070 4739 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711080 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711114 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711126 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711136 4739 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711148 4739 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711159 4739 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-wnx4h\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711197 4739 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711209 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711223 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711237 4739 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711274 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.032647 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" event={"ID":"fc5c5a16-015a-48fe-a2c1-1954543e14bd","Type":"ContainerDied","Data":"b730c2f9b6fecc1733f1c12778c0b205ba9f2320979358e1eb9d5c08b8b95993"} Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.032698 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b730c2f9b6fecc1733f1c12778c0b205ba9f2320979358e1eb9d5c08b8b95993" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.032708 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.143013 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb"] Feb 18 14:36:58 crc kubenswrapper[4739]: E0218 14:36:58.143684 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5c5a16-015a-48fe-a2c1-1954543e14bd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.143711 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5c5a16-015a-48fe-a2c1-1954543e14bd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.143970 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5c5a16-015a-48fe-a2c1-1954543e14bd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.145111 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.148558 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.148646 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.149681 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.149698 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.149725 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.157596 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb"] Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.222428 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.222744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.222840 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.222992 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btmmx\" (UniqueName: \"kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.223155 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.325357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.325741 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.325843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.325967 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btmmx\" (UniqueName: \"kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.326073 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.327078 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.330948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.332193 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.344725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.355474 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btmmx\" (UniqueName: \"kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.479808 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:59 crc kubenswrapper[4739]: I0218 14:36:59.021555 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb"] Feb 18 14:36:59 crc kubenswrapper[4739]: I0218 14:36:59.052465 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" event={"ID":"c4382bff-5480-4a55-ad49-e6293729f738","Type":"ContainerStarted","Data":"15162173142a3209858a61c984ad415f3528b65545ad8e7da191d56c81b327ca"} Feb 18 14:37:00 crc kubenswrapper[4739]: I0218 14:37:00.064661 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" event={"ID":"c4382bff-5480-4a55-ad49-e6293729f738","Type":"ContainerStarted","Data":"40b98275f70a2aa1b100a0382e07f6946f3af143f924151e6e7d6b280736d88c"} Feb 18 14:37:00 crc kubenswrapper[4739]: I0218 14:37:00.085539 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" podStartSLOduration=1.670493838 podStartE2EDuration="2.085519448s" podCreationTimestamp="2026-02-18 14:36:58 +0000 UTC" firstStartedPulling="2026-02-18 14:36:59.025266525 +0000 UTC m=+2251.520987447" lastFinishedPulling="2026-02-18 14:36:59.440292135 +0000 UTC m=+2251.936013057" observedRunningTime="2026-02-18 14:37:00.083566118 +0000 UTC m=+2252.579287060" watchObservedRunningTime="2026-02-18 14:37:00.085519448 +0000 UTC m=+2252.581240390" Feb 18 14:37:02 crc kubenswrapper[4739]: I0218 14:37:02.411371 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:37:02 crc kubenswrapper[4739]: E0218 14:37:02.412246 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:37:16 crc kubenswrapper[4739]: I0218 14:37:16.411591 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:37:16 crc kubenswrapper[4739]: E0218 14:37:16.413265 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:37:27 crc kubenswrapper[4739]: I0218 14:37:27.410218 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:37:27 crc kubenswrapper[4739]: E0218 14:37:27.411141 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:37:36 crc kubenswrapper[4739]: I0218 14:37:36.984575 4739 scope.go:117] "RemoveContainer" containerID="ea37bd2fe6c3cde4519476c0d93705aa44f3d3921ef14e7b974cb0ef1c293843" Feb 18 14:37:39 crc kubenswrapper[4739]: I0218 14:37:39.410653 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:37:39 crc kubenswrapper[4739]: 
E0218 14:37:39.411570 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:37:51 crc kubenswrapper[4739]: I0218 14:37:51.411001 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:37:51 crc kubenswrapper[4739]: E0218 14:37:51.412004 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:37:54 crc kubenswrapper[4739]: I0218 14:37:54.636594 4739 generic.go:334] "Generic (PLEG): container finished" podID="c4382bff-5480-4a55-ad49-e6293729f738" containerID="40b98275f70a2aa1b100a0382e07f6946f3af143f924151e6e7d6b280736d88c" exitCode=0 Feb 18 14:37:54 crc kubenswrapper[4739]: I0218 14:37:54.636654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" event={"ID":"c4382bff-5480-4a55-ad49-e6293729f738","Type":"ContainerDied","Data":"40b98275f70a2aa1b100a0382e07f6946f3af143f924151e6e7d6b280736d88c"} Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.109254 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.250234 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0\") pod \"c4382bff-5480-4a55-ad49-e6293729f738\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.250349 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory\") pod \"c4382bff-5480-4a55-ad49-e6293729f738\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.250469 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btmmx\" (UniqueName: \"kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx\") pod \"c4382bff-5480-4a55-ad49-e6293729f738\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.250504 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle\") pod \"c4382bff-5480-4a55-ad49-e6293729f738\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.250573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam\") pod \"c4382bff-5480-4a55-ad49-e6293729f738\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.256745 4739 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx" (OuterVolumeSpecName: "kube-api-access-btmmx") pod "c4382bff-5480-4a55-ad49-e6293729f738" (UID: "c4382bff-5480-4a55-ad49-e6293729f738"). InnerVolumeSpecName "kube-api-access-btmmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.256935 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c4382bff-5480-4a55-ad49-e6293729f738" (UID: "c4382bff-5480-4a55-ad49-e6293729f738"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.283350 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c4382bff-5480-4a55-ad49-e6293729f738" (UID: "c4382bff-5480-4a55-ad49-e6293729f738"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.283707 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "c4382bff-5480-4a55-ad49-e6293729f738" (UID: "c4382bff-5480-4a55-ad49-e6293729f738"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.285306 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory" (OuterVolumeSpecName: "inventory") pod "c4382bff-5480-4a55-ad49-e6293729f738" (UID: "c4382bff-5480-4a55-ad49-e6293729f738"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.353612 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btmmx\" (UniqueName: \"kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx\") on node \"crc\" DevicePath \"\"" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.353641 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.353650 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.353659 4739 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.353669 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.666017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" event={"ID":"c4382bff-5480-4a55-ad49-e6293729f738","Type":"ContainerDied","Data":"15162173142a3209858a61c984ad415f3528b65545ad8e7da191d56c81b327ca"} Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.666060 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15162173142a3209858a61c984ad415f3528b65545ad8e7da191d56c81b327ca" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.666112 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.758626 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j"] Feb 18 14:37:56 crc kubenswrapper[4739]: E0218 14:37:56.759231 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4382bff-5480-4a55-ad49-e6293729f738" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.759256 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4382bff-5480-4a55-ad49-e6293729f738" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.759568 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4382bff-5480-4a55-ad49-e6293729f738" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.760624 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.776182 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j"] Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788037 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788256 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788313 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788338 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788478 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788561 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867594 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867639 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867768 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867884 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xddcl\" (UniqueName: \"kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867918 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.971192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.971721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.971849 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xddcl\" (UniqueName: \"kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.971957 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.972080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.972152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.978056 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.979549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.980197 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.982794 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.984889 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.990751 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xddcl\" (UniqueName: \"kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: 
\"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:57 crc kubenswrapper[4739]: I0218 14:37:57.124141 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:57 crc kubenswrapper[4739]: I0218 14:37:57.708507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j"] Feb 18 14:37:58 crc kubenswrapper[4739]: I0218 14:37:58.687729 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" event={"ID":"015603d5-7d09-4388-a5d1-93c25d1b6344","Type":"ContainerStarted","Data":"38ac7b0df886ec0b85771bcdb74212edfcfe5ad9d5faae601f530372298c1069"} Feb 18 14:37:59 crc kubenswrapper[4739]: I0218 14:37:59.700062 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" event={"ID":"015603d5-7d09-4388-a5d1-93c25d1b6344","Type":"ContainerStarted","Data":"d59d6e2338496bc8e22311dc70f07b8202dfa292e5c94da4f37791a2d16e02ac"} Feb 18 14:37:59 crc kubenswrapper[4739]: I0218 14:37:59.728469 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" podStartSLOduration=2.216023198 podStartE2EDuration="3.728432308s" podCreationTimestamp="2026-02-18 14:37:56 +0000 UTC" firstStartedPulling="2026-02-18 14:37:57.723654978 +0000 UTC m=+2310.219375900" lastFinishedPulling="2026-02-18 14:37:59.236064088 +0000 UTC m=+2311.731785010" observedRunningTime="2026-02-18 14:37:59.717177142 +0000 UTC m=+2312.212898064" watchObservedRunningTime="2026-02-18 14:37:59.728432308 +0000 UTC m=+2312.224153230" Feb 18 14:38:03 crc kubenswrapper[4739]: I0218 14:38:03.411622 4739 scope.go:117] "RemoveContainer" 
containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:38:03 crc kubenswrapper[4739]: E0218 14:38:03.412675 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:38:16 crc kubenswrapper[4739]: I0218 14:38:16.417998 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:38:16 crc kubenswrapper[4739]: E0218 14:38:16.418704 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:38:27 crc kubenswrapper[4739]: I0218 14:38:27.410196 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:38:27 crc kubenswrapper[4739]: E0218 14:38:27.410983 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:38:41 crc kubenswrapper[4739]: I0218 14:38:41.411488 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:38:41 crc kubenswrapper[4739]: E0218 14:38:41.412535 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:38:43 crc kubenswrapper[4739]: I0218 14:38:43.189244 4739 generic.go:334] "Generic (PLEG): container finished" podID="015603d5-7d09-4388-a5d1-93c25d1b6344" containerID="d59d6e2338496bc8e22311dc70f07b8202dfa292e5c94da4f37791a2d16e02ac" exitCode=0
Feb 18 14:38:43 crc kubenswrapper[4739]: I0218 14:38:43.189322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" event={"ID":"015603d5-7d09-4388-a5d1-93c25d1b6344","Type":"ContainerDied","Data":"d59d6e2338496bc8e22311dc70f07b8202dfa292e5c94da4f37791a2d16e02ac"}
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.750645 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j"
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829152 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") "
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829482 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") "
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829542 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") "
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829569 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xddcl\" (UniqueName: \"kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") "
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829636 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") "
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829716 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") "
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.835635 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl" (OuterVolumeSpecName: "kube-api-access-xddcl") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "kube-api-access-xddcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.844479 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.873018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.873348 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.876137 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.893044 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory" (OuterVolumeSpecName: "inventory") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933752 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933793 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933807 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933817 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xddcl\" (UniqueName: \"kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl\") on node \"crc\" DevicePath \"\""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933826 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933838 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.212325 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" event={"ID":"015603d5-7d09-4388-a5d1-93c25d1b6344","Type":"ContainerDied","Data":"38ac7b0df886ec0b85771bcdb74212edfcfe5ad9d5faae601f530372298c1069"}
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.212374 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38ac7b0df886ec0b85771bcdb74212edfcfe5ad9d5faae601f530372298c1069"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.212433 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.320468 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"]
Feb 18 14:38:45 crc kubenswrapper[4739]: E0218 14:38:45.321041 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015603d5-7d09-4388-a5d1-93c25d1b6344" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.321064 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="015603d5-7d09-4388-a5d1-93c25d1b6344" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.321475 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="015603d5-7d09-4388-a5d1-93c25d1b6344" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.322504 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.325899 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.326131 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.326752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.326853 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.326998 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.341767 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.341847 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.341890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.341993 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6tp\" (UniqueName: \"kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.342059 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.362824 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"]
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.444685 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.444759 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.445325 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db6tp\" (UniqueName: \"kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.445495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.445568 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.450031 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.450250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.451243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.451645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.468296 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db6tp\" (UniqueName: \"kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.658549 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:38:46 crc kubenswrapper[4739]: I0218 14:38:46.209241 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"]
Feb 18 14:38:46 crc kubenswrapper[4739]: I0218 14:38:46.225922 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" event={"ID":"bd7dea6a-d047-4a6c-809f-395a7cf418e8","Type":"ContainerStarted","Data":"7b66148ae4cb6a51928b96889edb12cde3405f18efcd057d483e7ddb5cc7b7a1"}
Feb 18 14:38:47 crc kubenswrapper[4739]: I0218 14:38:47.238008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" event={"ID":"bd7dea6a-d047-4a6c-809f-395a7cf418e8","Type":"ContainerStarted","Data":"0d816f28e3c7a56f082308e8cbb34038d3dc00b07cc36fa6c338ae226d5a44e8"}
Feb 18 14:38:47 crc kubenswrapper[4739]: I0218 14:38:47.268888 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" podStartSLOduration=1.624339532 podStartE2EDuration="2.268859481s" podCreationTimestamp="2026-02-18 14:38:45 +0000 UTC" firstStartedPulling="2026-02-18 14:38:46.217314712 +0000 UTC m=+2358.713035644" lastFinishedPulling="2026-02-18 14:38:46.861834671 +0000 UTC m=+2359.357555593" observedRunningTime="2026-02-18 14:38:47.260693663 +0000 UTC m=+2359.756414605" watchObservedRunningTime="2026-02-18 14:38:47.268859481 +0000 UTC m=+2359.764580403"
Feb 18 14:38:55 crc kubenswrapper[4739]: I0218 14:38:55.410502 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:38:55 crc kubenswrapper[4739]: E0218 14:38:55.411281 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:39:08 crc kubenswrapper[4739]: I0218 14:39:08.418306 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:39:08 crc kubenswrapper[4739]: E0218 14:39:08.419166 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:39:23 crc kubenswrapper[4739]: I0218 14:39:23.411131 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:39:23 crc kubenswrapper[4739]: E0218 14:39:23.412122 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:39:36 crc kubenswrapper[4739]: I0218 14:39:36.412686 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:39:36 crc kubenswrapper[4739]: E0218 14:39:36.414091 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:39:49 crc kubenswrapper[4739]: I0218 14:39:49.410682 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:39:49 crc kubenswrapper[4739]: E0218 14:39:49.411501 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:40:00 crc kubenswrapper[4739]: I0218 14:40:00.410514 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:40:00 crc kubenswrapper[4739]: E0218 14:40:00.411463 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:40:12 crc kubenswrapper[4739]: I0218 14:40:12.410603 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:40:12 crc kubenswrapper[4739]: E0218 14:40:12.411371 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:40:26 crc kubenswrapper[4739]: I0218 14:40:26.413349 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:40:26 crc kubenswrapper[4739]: E0218 14:40:26.414299 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:40:37 crc kubenswrapper[4739]: I0218 14:40:37.410951 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"
Feb 18 14:40:38 crc kubenswrapper[4739]: I0218 14:40:38.290718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd"}
Feb 18 14:41:57 crc kubenswrapper[4739]: I0218 14:41:57.019061 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-668fffc447-mjpk7" podUID="ac478be7-1c16-4a7f-a2d2-618cfe76c3d3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Feb 18 14:42:17 crc kubenswrapper[4739]: I0218 14:42:17.312005 4739 generic.go:334] "Generic (PLEG): container finished" podID="bd7dea6a-d047-4a6c-809f-395a7cf418e8" containerID="0d816f28e3c7a56f082308e8cbb34038d3dc00b07cc36fa6c338ae226d5a44e8" exitCode=0
Feb 18 14:42:17 crc kubenswrapper[4739]: I0218 14:42:17.312210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" event={"ID":"bd7dea6a-d047-4a6c-809f-395a7cf418e8","Type":"ContainerDied","Data":"0d816f28e3c7a56f082308e8cbb34038d3dc00b07cc36fa6c338ae226d5a44e8"}
Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.820565 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.947178 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db6tp\" (UniqueName: \"kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp\") pod \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") "
Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.947273 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam\") pod \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") "
Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.947368 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle\") pod \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") "
Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.947488 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0\") pod \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") "
Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.947545 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory\") pod \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") "
Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.953654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bd7dea6a-d047-4a6c-809f-395a7cf418e8" (UID: "bd7dea6a-d047-4a6c-809f-395a7cf418e8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.956548 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp" (OuterVolumeSpecName: "kube-api-access-db6tp") pod "bd7dea6a-d047-4a6c-809f-395a7cf418e8" (UID: "bd7dea6a-d047-4a6c-809f-395a7cf418e8"). InnerVolumeSpecName "kube-api-access-db6tp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.980126 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bd7dea6a-d047-4a6c-809f-395a7cf418e8" (UID: "bd7dea6a-d047-4a6c-809f-395a7cf418e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.982927 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "bd7dea6a-d047-4a6c-809f-395a7cf418e8" (UID: "bd7dea6a-d047-4a6c-809f-395a7cf418e8"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.006719 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory" (OuterVolumeSpecName: "inventory") pod "bd7dea6a-d047-4a6c-809f-395a7cf418e8" (UID: "bd7dea6a-d047-4a6c-809f-395a7cf418e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.050308 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.050349 4739 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.050360 4739 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.050369 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.050379 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db6tp\" (UniqueName: \"kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.336413 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.338672 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" event={"ID":"bd7dea6a-d047-4a6c-809f-395a7cf418e8","Type":"ContainerDied","Data":"7b66148ae4cb6a51928b96889edb12cde3405f18efcd057d483e7ddb5cc7b7a1"}
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.338731 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b66148ae4cb6a51928b96889edb12cde3405f18efcd057d483e7ddb5cc7b7a1"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.440282 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"]
Feb 18 14:42:19 crc kubenswrapper[4739]: E0218 14:42:19.440913 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7dea6a-d047-4a6c-809f-395a7cf418e8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.440933 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7dea6a-d047-4a6c-809f-395a7cf418e8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.441225 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7dea6a-d047-4a6c-809f-395a7cf418e8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.442356 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.449424 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.449546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.449668 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.449959 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.450116 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.450401 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.450590 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.460883 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"]
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.561973 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563383 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563435 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563651 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"
Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563813 4739
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q46b\" (UniqueName: \"kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563935 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563979 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.564154 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.564251 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.564704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.666882 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.666985 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667048 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667156 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667229 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667301 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q46b\" (UniqueName: \"kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667933 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667988 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.668086 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.668172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.669623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.671181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.671548 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.671572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.672180 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.672502 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.672961 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.673463 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.673954 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.676186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.690438 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q46b\" (UniqueName: \"kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.762104 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:20 crc kubenswrapper[4739]: W0218 14:42:20.379574 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08b26802_db14_4190_99d1_9c9c7403195b.slice/crio-5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082 WatchSource:0}: Error finding container 5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082: Status 404 returned error can't find the container with id 5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082 Feb 18 14:42:20 crc kubenswrapper[4739]: I0218 14:42:20.379623 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"] Feb 18 14:42:20 crc kubenswrapper[4739]: I0218 14:42:20.381785 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:42:21 crc kubenswrapper[4739]: I0218 14:42:21.360302 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" event={"ID":"08b26802-db14-4190-99d1-9c9c7403195b","Type":"ContainerStarted","Data":"221a440c53572e2fdfdf122096d71c056281c216b00bcf5699b43df7aabbf6c7"} Feb 18 14:42:21 crc kubenswrapper[4739]: I0218 14:42:21.360567 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" event={"ID":"08b26802-db14-4190-99d1-9c9c7403195b","Type":"ContainerStarted","Data":"5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082"} Feb 18 14:42:21 crc kubenswrapper[4739]: I0218 14:42:21.392078 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" podStartSLOduration=1.799046391 podStartE2EDuration="2.392059512s" podCreationTimestamp="2026-02-18 14:42:19 +0000 UTC" firstStartedPulling="2026-02-18 14:42:20.38154717 +0000 UTC m=+2572.877268082" lastFinishedPulling="2026-02-18 14:42:20.974560281 +0000 UTC m=+2573.470281203" observedRunningTime="2026-02-18 14:42:21.381526528 +0000 UTC m=+2573.877247470" watchObservedRunningTime="2026-02-18 14:42:21.392059512 +0000 UTC m=+2573.887780434" Feb 18 14:42:59 crc kubenswrapper[4739]: I0218 14:42:59.374166 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:42:59 crc kubenswrapper[4739]: I0218 14:42:59.374928 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:43:29 crc kubenswrapper[4739]: I0218 14:43:29.373045 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 18 14:43:29 crc kubenswrapper[4739]: I0218 14:43:29.373699 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.372491 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.372978 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.373020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.373926 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.373974 4739 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd" gracePeriod=600 Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.667203 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd" exitCode=0 Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.667251 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd"} Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.667508 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:44:00 crc kubenswrapper[4739]: I0218 14:44:00.679707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"} Feb 18 14:44:31 crc kubenswrapper[4739]: I0218 14:44:31.991135 4739 generic.go:334] "Generic (PLEG): container finished" podID="08b26802-db14-4190-99d1-9c9c7403195b" containerID="221a440c53572e2fdfdf122096d71c056281c216b00bcf5699b43df7aabbf6c7" exitCode=0 Feb 18 14:44:31 crc kubenswrapper[4739]: I0218 14:44:31.991234 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" event={"ID":"08b26802-db14-4190-99d1-9c9c7403195b","Type":"ContainerDied","Data":"221a440c53572e2fdfdf122096d71c056281c216b00bcf5699b43df7aabbf6c7"} Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 
14:44:33.629551 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.709869 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.709971 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710070 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710224 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710270 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: 
\"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710296 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710384 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710862 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q46b\" (UniqueName: \"kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.711011 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.711066 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.715586 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b" (OuterVolumeSpecName: "kube-api-access-8q46b") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "kube-api-access-8q46b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.737914 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.742382 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.749087 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.751172 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.755394 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.755981 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.764175 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory" (OuterVolumeSpecName: "inventory") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.782822 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.783818 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.785592 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814378 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814409 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q46b\" (UniqueName: \"kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814418 4739 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814428 4739 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814437 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814461 4739 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814469 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814477 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814485 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814494 4739 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814502 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\""
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.027426 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" event={"ID":"08b26802-db14-4190-99d1-9c9c7403195b","Type":"ContainerDied","Data":"5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082"}
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.027538 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.027605 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.111343 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"]
Feb 18 14:44:34 crc kubenswrapper[4739]: E0218 14:44:34.111933 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b26802-db14-4190-99d1-9c9c7403195b" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.111956 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b26802-db14-4190-99d1-9c9c7403195b" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.112207 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b26802-db14-4190-99d1-9c9c7403195b" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.113561 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.120200 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.120215 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.121072 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.121264 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.121380 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.127347 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"]
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.224017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.224355 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw5d6\" (UniqueName: \"kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.224408 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.224474 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.224907 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.225125 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.225245 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327573 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327773 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327796 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw5d6\" (UniqueName: \"kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327826 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327855 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327953 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.331844 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.332102 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.332225 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.332230 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.332900 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.333042 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.346745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw5d6\" (UniqueName: \"kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.448842 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"
Feb 18 14:44:35 crc kubenswrapper[4739]: I0218 14:44:35.055891 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"]
Feb 18 14:44:36 crc kubenswrapper[4739]: I0218 14:44:36.063562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" event={"ID":"aa0510e7-f2a3-4466-b797-dab2e7ec0218","Type":"ContainerStarted","Data":"fb0e030e4912a00d0734d07237c410d248f64fab7894be9ef716125bbc0533aa"}
Feb 18 14:44:36 crc kubenswrapper[4739]: I0218 14:44:36.063621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" event={"ID":"aa0510e7-f2a3-4466-b797-dab2e7ec0218","Type":"ContainerStarted","Data":"fb7106cf2f98b5b393698d853885e2d731a92c39d93dbf1c2bec0a8cb53a7200"}
Feb 18 14:44:36 crc kubenswrapper[4739]: I0218 14:44:36.091788 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" podStartSLOduration=1.669031812 podStartE2EDuration="2.091760547s" podCreationTimestamp="2026-02-18 14:44:34 +0000 UTC" firstStartedPulling="2026-02-18 14:44:35.060847637 +0000 UTC m=+2707.556568559" lastFinishedPulling="2026-02-18 14:44:35.483576372 +0000 UTC m=+2707.979297294" observedRunningTime="2026-02-18 14:44:36.080807671 +0000 UTC m=+2708.576528613" watchObservedRunningTime="2026-02-18 14:44:36.091760547 +0000 UTC m=+2708.587481469"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.453920 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"]
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.456887 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.474811 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"]
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.585545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.585612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.585820 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnjqb\" (UniqueName: \"kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.688432 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnjqb\" (UniqueName: \"kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.688721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.688760 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.689384 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.689989 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.713954 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnjqb\" (UniqueName: \"kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.794340 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:48 crc kubenswrapper[4739]: I0218 14:44:48.325979 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"]
Feb 18 14:44:49 crc kubenswrapper[4739]: I0218 14:44:49.230325 4739 generic.go:334] "Generic (PLEG): container finished" podID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerID="e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e" exitCode=0
Feb 18 14:44:49 crc kubenswrapper[4739]: I0218 14:44:49.230411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerDied","Data":"e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e"}
Feb 18 14:44:49 crc kubenswrapper[4739]: I0218 14:44:49.230714 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerStarted","Data":"d383f10ea29e5502944b4b1aab6dcf695aa1257aa566befeddf73868399abf6a"}
Feb 18 14:44:50 crc kubenswrapper[4739]: I0218 14:44:50.242898 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerStarted","Data":"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc"}
Feb 18 14:44:55 crc kubenswrapper[4739]: I0218 14:44:55.300889 4739 generic.go:334] "Generic (PLEG): container finished" podID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerID="ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc" exitCode=0
Feb 18 14:44:55 crc kubenswrapper[4739]: I0218 14:44:55.300992 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerDied","Data":"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc"}
Feb 18 14:44:56 crc kubenswrapper[4739]: I0218 14:44:56.317827 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerStarted","Data":"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424"}
Feb 18 14:44:56 crc kubenswrapper[4739]: I0218 14:44:56.351046 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4hwc4" podStartSLOduration=2.791429591 podStartE2EDuration="9.351022163s" podCreationTimestamp="2026-02-18 14:44:47 +0000 UTC" firstStartedPulling="2026-02-18 14:44:49.237936843 +0000 UTC m=+2721.733657765" lastFinishedPulling="2026-02-18 14:44:55.797529415 +0000 UTC m=+2728.293250337" observedRunningTime="2026-02-18 14:44:56.344632743 +0000 UTC m=+2728.840353685" watchObservedRunningTime="2026-02-18 14:44:56.351022163 +0000 UTC m=+2728.846743085"
Feb 18 14:44:57 crc kubenswrapper[4739]: I0218 14:44:57.795270 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:57 crc kubenswrapper[4739]: I0218 14:44:57.795848 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4hwc4"
Feb 18 14:44:58 crc kubenswrapper[4739]: I0218 14:44:58.857676 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4hwc4" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" probeResult="failure" output=<
Feb 18 14:44:58 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 14:44:58 crc kubenswrapper[4739]: >
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.154430 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"]
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.156997 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.159893 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.165336 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.168000 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"]
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.332213 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.332326 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h68t9\" (UniqueName: \"kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.332569 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.434980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h68t9\" (UniqueName: \"kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.435070 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.435259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.436314 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.441411 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.455566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h68t9\" (UniqueName: \"kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.491296 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:01 crc kubenswrapper[4739]: I0218 14:45:01.027232 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"]
Feb 18 14:45:01 crc kubenswrapper[4739]: W0218 14:45:01.028749 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd759be05_a3d9_4dd0_b360_dc1f752b84be.slice/crio-f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc WatchSource:0}: Error finding container f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc: Status 404 returned error can't find the container with id f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc
Feb 18 14:45:01 crc kubenswrapper[4739]: I0218 14:45:01.373748 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" event={"ID":"d759be05-a3d9-4dd0-b360-dc1f752b84be","Type":"ContainerStarted","Data":"aa9ba9ec1d52c3700b6b7f0b25f14494ecf423b123e22d781f5b92c7a26b7e48"}
Feb 18 14:45:01 crc kubenswrapper[4739]: I0218 14:45:01.374946 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" event={"ID":"d759be05-a3d9-4dd0-b360-dc1f752b84be","Type":"ContainerStarted","Data":"f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc"}
Feb 18 14:45:01 crc kubenswrapper[4739]: I0218 14:45:01.402518 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" podStartSLOduration=1.402492329 podStartE2EDuration="1.402492329s" podCreationTimestamp="2026-02-18 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:45:01.391572614 +0000 UTC m=+2733.887293546" watchObservedRunningTime="2026-02-18 14:45:01.402492329 +0000 UTC m=+2733.898213251"
Feb 18 14:45:02 crc kubenswrapper[4739]: I0218 14:45:02.386257 4739 generic.go:334] "Generic (PLEG): container finished" podID="d759be05-a3d9-4dd0-b360-dc1f752b84be" containerID="aa9ba9ec1d52c3700b6b7f0b25f14494ecf423b123e22d781f5b92c7a26b7e48" exitCode=0
Feb 18 14:45:02 crc kubenswrapper[4739]: I0218 14:45:02.386307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" event={"ID":"d759be05-a3d9-4dd0-b360-dc1f752b84be","Type":"ContainerDied","Data":"aa9ba9ec1d52c3700b6b7f0b25f14494ecf423b123e22d781f5b92c7a26b7e48"}
Feb 18 14:45:03 crc kubenswrapper[4739]: I0218 14:45:03.881946 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"
Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.043829 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume\") pod \"d759be05-a3d9-4dd0-b360-dc1f752b84be\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") "
Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.044130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume\") pod \"d759be05-a3d9-4dd0-b360-dc1f752b84be\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") "
Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.044425 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume" (OuterVolumeSpecName: "config-volume") pod "d759be05-a3d9-4dd0-b360-dc1f752b84be" (UID: "d759be05-a3d9-4dd0-b360-dc1f752b84be"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.044590 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h68t9\" (UniqueName: \"kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9\") pod \"d759be05-a3d9-4dd0-b360-dc1f752b84be\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") "
Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.045532 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.050268 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9" (OuterVolumeSpecName: "kube-api-access-h68t9") pod "d759be05-a3d9-4dd0-b360-dc1f752b84be" (UID: "d759be05-a3d9-4dd0-b360-dc1f752b84be"). InnerVolumeSpecName "kube-api-access-h68t9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.050607 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d759be05-a3d9-4dd0-b360-dc1f752b84be" (UID: "d759be05-a3d9-4dd0-b360-dc1f752b84be"). InnerVolumeSpecName "secret-volume".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.149820 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.149859 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h68t9\" (UniqueName: \"kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.412249 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.438151 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" event={"ID":"d759be05-a3d9-4dd0-b360-dc1f752b84be","Type":"ContainerDied","Data":"f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc"} Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.438206 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.491308 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"] Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.504261 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"] Feb 18 14:45:06 crc kubenswrapper[4739]: I0218 14:45:06.447065 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" 
path="/var/lib/kubelet/pods/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0/volumes" Feb 18 14:45:08 crc kubenswrapper[4739]: I0218 14:45:08.869110 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4hwc4" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" probeResult="failure" output=< Feb 18 14:45:08 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:45:08 crc kubenswrapper[4739]: > Feb 18 14:45:18 crc kubenswrapper[4739]: I0218 14:45:18.844755 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4hwc4" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" probeResult="failure" output=< Feb 18 14:45:18 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:45:18 crc kubenswrapper[4739]: > Feb 18 14:45:27 crc kubenswrapper[4739]: I0218 14:45:27.861885 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:45:27 crc kubenswrapper[4739]: I0218 14:45:27.921271 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:45:28 crc kubenswrapper[4739]: I0218 14:45:28.111148 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"] Feb 18 14:45:29 crc kubenswrapper[4739]: I0218 14:45:29.677236 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4hwc4" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" containerID="cri-o://04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424" gracePeriod=2 Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.346378 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.407770 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities\") pod \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.407845 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnjqb\" (UniqueName: \"kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb\") pod \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.408128 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content\") pod \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.408557 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities" (OuterVolumeSpecName: "utilities") pod "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" (UID: "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.409579 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.415750 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb" (OuterVolumeSpecName: "kube-api-access-gnjqb") pod "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" (UID: "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4"). InnerVolumeSpecName "kube-api-access-gnjqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.513093 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnjqb\" (UniqueName: \"kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.552926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" (UID: "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.615077 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.690432 4739 generic.go:334] "Generic (PLEG): container finished" podID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerID="04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424" exitCode=0 Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.690489 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerDied","Data":"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424"} Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.690517 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerDied","Data":"d383f10ea29e5502944b4b1aab6dcf695aa1257aa566befeddf73868399abf6a"} Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.690543 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.690543 4739 scope.go:117] "RemoveContainer" containerID="04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.720035 4739 scope.go:117] "RemoveContainer" containerID="ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.737580 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"] Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.752632 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"] Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.757994 4739 scope.go:117] "RemoveContainer" containerID="e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.802362 4739 scope.go:117] "RemoveContainer" containerID="04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424" Feb 18 14:45:30 crc kubenswrapper[4739]: E0218 14:45:30.802864 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424\": container with ID starting with 04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424 not found: ID does not exist" containerID="04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.802906 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424"} err="failed to get container status \"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424\": rpc error: code = NotFound desc = could not find container 
\"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424\": container with ID starting with 04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424 not found: ID does not exist" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.802934 4739 scope.go:117] "RemoveContainer" containerID="ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc" Feb 18 14:45:30 crc kubenswrapper[4739]: E0218 14:45:30.803322 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc\": container with ID starting with ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc not found: ID does not exist" containerID="ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.803380 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc"} err="failed to get container status \"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc\": rpc error: code = NotFound desc = could not find container \"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc\": container with ID starting with ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc not found: ID does not exist" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.803405 4739 scope.go:117] "RemoveContainer" containerID="e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e" Feb 18 14:45:30 crc kubenswrapper[4739]: E0218 14:45:30.803775 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e\": container with ID starting with e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e not found: ID does not exist" 
containerID="e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.803845 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e"} err="failed to get container status \"e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e\": rpc error: code = NotFound desc = could not find container \"e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e\": container with ID starting with e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e not found: ID does not exist" Feb 18 14:45:32 crc kubenswrapper[4739]: I0218 14:45:32.423354 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" path="/var/lib/kubelet/pods/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4/volumes" Feb 18 14:45:37 crc kubenswrapper[4739]: I0218 14:45:37.240033 4739 scope.go:117] "RemoveContainer" containerID="74c7bbe24b159d4bcf411cc4b8b9d30acdb5e3c7b45e81fb2a3d542d4b3390c4" Feb 18 14:45:59 crc kubenswrapper[4739]: I0218 14:45:59.372877 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:45:59 crc kubenswrapper[4739]: I0218 14:45:59.373554 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.849498 4739 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:07 crc kubenswrapper[4739]: E0218 14:46:07.850959 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d759be05-a3d9-4dd0-b360-dc1f752b84be" containerName="collect-profiles" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.850978 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d759be05-a3d9-4dd0-b360-dc1f752b84be" containerName="collect-profiles" Feb 18 14:46:07 crc kubenswrapper[4739]: E0218 14:46:07.850998 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="extract-utilities" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.851006 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="extract-utilities" Feb 18 14:46:07 crc kubenswrapper[4739]: E0218 14:46:07.851027 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="extract-content" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.851036 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="extract-content" Feb 18 14:46:07 crc kubenswrapper[4739]: E0218 14:46:07.851056 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.851065 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.851387 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d759be05-a3d9-4dd0-b360-dc1f752b84be" containerName="collect-profiles" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.851419 4739 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.853779 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.864099 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.882930 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.883038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hh49\" (UniqueName: \"kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.883080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.985104 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hh49\" (UniqueName: \"kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49\") pod \"community-operators-bq9l2\" 
(UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.985200 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.985424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.985824 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.985879 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:08 crc kubenswrapper[4739]: I0218 14:46:08.028934 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hh49\" (UniqueName: \"kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " 
pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:08 crc kubenswrapper[4739]: I0218 14:46:08.192514 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:08 crc kubenswrapper[4739]: I0218 14:46:08.760541 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:08 crc kubenswrapper[4739]: W0218 14:46:08.764350 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb797d4b2_d333_4327_b9e7_f4eeec12ae1d.slice/crio-fc0c393b5895cde4aa020787ab4efbc0f07df369f4c5bd0736d517f1681be106 WatchSource:0}: Error finding container fc0c393b5895cde4aa020787ab4efbc0f07df369f4c5bd0736d517f1681be106: Status 404 returned error can't find the container with id fc0c393b5895cde4aa020787ab4efbc0f07df369f4c5bd0736d517f1681be106 Feb 18 14:46:09 crc kubenswrapper[4739]: I0218 14:46:09.127475 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerStarted","Data":"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3"} Feb 18 14:46:09 crc kubenswrapper[4739]: I0218 14:46:09.127726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerStarted","Data":"fc0c393b5895cde4aa020787ab4efbc0f07df369f4c5bd0736d517f1681be106"} Feb 18 14:46:10 crc kubenswrapper[4739]: I0218 14:46:10.139346 4739 generic.go:334] "Generic (PLEG): container finished" podID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerID="e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3" exitCode=0 Feb 18 14:46:10 crc kubenswrapper[4739]: I0218 14:46:10.139611 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerDied","Data":"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3"} Feb 18 14:46:11 crc kubenswrapper[4739]: I0218 14:46:11.156180 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerStarted","Data":"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0"} Feb 18 14:46:13 crc kubenswrapper[4739]: I0218 14:46:13.180888 4739 generic.go:334] "Generic (PLEG): container finished" podID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerID="553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0" exitCode=0 Feb 18 14:46:13 crc kubenswrapper[4739]: I0218 14:46:13.180986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerDied","Data":"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0"} Feb 18 14:46:14 crc kubenswrapper[4739]: I0218 14:46:14.193287 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerStarted","Data":"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c"} Feb 18 14:46:14 crc kubenswrapper[4739]: I0218 14:46:14.220982 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bq9l2" podStartSLOduration=3.640985866 podStartE2EDuration="7.220960385s" podCreationTimestamp="2026-02-18 14:46:07 +0000 UTC" firstStartedPulling="2026-02-18 14:46:10.141661344 +0000 UTC m=+2802.637382266" lastFinishedPulling="2026-02-18 14:46:13.721635863 +0000 UTC m=+2806.217356785" observedRunningTime="2026-02-18 14:46:14.21233197 +0000 UTC m=+2806.708052892" 
watchObservedRunningTime="2026-02-18 14:46:14.220960385 +0000 UTC m=+2806.716681307" Feb 18 14:46:18 crc kubenswrapper[4739]: I0218 14:46:18.193361 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:18 crc kubenswrapper[4739]: I0218 14:46:18.193759 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:18 crc kubenswrapper[4739]: I0218 14:46:18.247541 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:18 crc kubenswrapper[4739]: I0218 14:46:18.311289 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:18 crc kubenswrapper[4739]: I0218 14:46:18.499870 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.254271 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bq9l2" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="registry-server" containerID="cri-o://124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c" gracePeriod=2 Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.792858 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.936411 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hh49\" (UniqueName: \"kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49\") pod \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.936615 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities\") pod \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.936792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content\") pod \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.937759 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities" (OuterVolumeSpecName: "utilities") pod "b797d4b2-d333-4327-b9e7-f4eeec12ae1d" (UID: "b797d4b2-d333-4327-b9e7-f4eeec12ae1d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.938878 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.951863 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49" (OuterVolumeSpecName: "kube-api-access-5hh49") pod "b797d4b2-d333-4327-b9e7-f4eeec12ae1d" (UID: "b797d4b2-d333-4327-b9e7-f4eeec12ae1d"). InnerVolumeSpecName "kube-api-access-5hh49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.001479 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b797d4b2-d333-4327-b9e7-f4eeec12ae1d" (UID: "b797d4b2-d333-4327-b9e7-f4eeec12ae1d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.040680 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.040711 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hh49\" (UniqueName: \"kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.271658 4739 generic.go:334] "Generic (PLEG): container finished" podID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerID="124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c" exitCode=0 Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.271705 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerDied","Data":"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c"} Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.271733 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerDied","Data":"fc0c393b5895cde4aa020787ab4efbc0f07df369f4c5bd0736d517f1681be106"} Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.271749 4739 scope.go:117] "RemoveContainer" containerID="124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.271746 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.299613 4739 scope.go:117] "RemoveContainer" containerID="553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.306691 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.315836 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.325648 4739 scope.go:117] "RemoveContainer" containerID="e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.383232 4739 scope.go:117] "RemoveContainer" containerID="124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c" Feb 18 14:46:21 crc kubenswrapper[4739]: E0218 14:46:21.383672 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c\": container with ID starting with 124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c not found: ID does not exist" containerID="124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.383712 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c"} err="failed to get container status \"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c\": rpc error: code = NotFound desc = could not find container \"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c\": container with ID starting with 124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c not 
found: ID does not exist" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.383742 4739 scope.go:117] "RemoveContainer" containerID="553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0" Feb 18 14:46:21 crc kubenswrapper[4739]: E0218 14:46:21.384102 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0\": container with ID starting with 553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0 not found: ID does not exist" containerID="553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.384123 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0"} err="failed to get container status \"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0\": rpc error: code = NotFound desc = could not find container \"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0\": container with ID starting with 553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0 not found: ID does not exist" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.384138 4739 scope.go:117] "RemoveContainer" containerID="e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3" Feb 18 14:46:21 crc kubenswrapper[4739]: E0218 14:46:21.384475 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3\": container with ID starting with e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3 not found: ID does not exist" containerID="e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.384496 4739 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3"} err="failed to get container status \"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3\": rpc error: code = NotFound desc = could not find container \"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3\": container with ID starting with e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3 not found: ID does not exist" Feb 18 14:46:22 crc kubenswrapper[4739]: I0218 14:46:22.424478 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" path="/var/lib/kubelet/pods/b797d4b2-d333-4327-b9e7-f4eeec12ae1d/volumes" Feb 18 14:46:29 crc kubenswrapper[4739]: I0218 14:46:29.372732 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:46:29 crc kubenswrapper[4739]: I0218 14:46:29.373424 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.552849 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:39 crc kubenswrapper[4739]: E0218 14:46:39.553826 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="extract-content" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.553840 4739 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="extract-content" Feb 18 14:46:39 crc kubenswrapper[4739]: E0218 14:46:39.553876 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="extract-utilities" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.553884 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="extract-utilities" Feb 18 14:46:39 crc kubenswrapper[4739]: E0218 14:46:39.553919 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="registry-server" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.553925 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="registry-server" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.554126 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="registry-server" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.556065 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.577518 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.713751 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkm2\" (UniqueName: \"kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.713813 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.714012 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.816537 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdkm2\" (UniqueName: \"kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.816620 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.816693 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.817335 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.817333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.841887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdkm2\" (UniqueName: \"kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.875066 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:40 crc kubenswrapper[4739]: W0218 14:46:40.416805 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod921cb713_1271_40ce_a50a_3444603bbb32.slice/crio-c9bfd091b1236c4f08f03a532bdb9f6bd6df1ad907fddfcfbd523f44d2a32d93 WatchSource:0}: Error finding container c9bfd091b1236c4f08f03a532bdb9f6bd6df1ad907fddfcfbd523f44d2a32d93: Status 404 returned error can't find the container with id c9bfd091b1236c4f08f03a532bdb9f6bd6df1ad907fddfcfbd523f44d2a32d93 Feb 18 14:46:40 crc kubenswrapper[4739]: I0218 14:46:40.425238 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:40 crc kubenswrapper[4739]: I0218 14:46:40.547103 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerStarted","Data":"c9bfd091b1236c4f08f03a532bdb9f6bd6df1ad907fddfcfbd523f44d2a32d93"} Feb 18 14:46:41 crc kubenswrapper[4739]: I0218 14:46:41.557351 4739 generic.go:334] "Generic (PLEG): container finished" podID="921cb713-1271-40ce-a50a-3444603bbb32" containerID="cfbc3e7c21bfcf7fb2a0300c6cf86f59e46e81146798cac3c4ec7c2a91e995bf" exitCode=0 Feb 18 14:46:41 crc kubenswrapper[4739]: I0218 14:46:41.557401 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerDied","Data":"cfbc3e7c21bfcf7fb2a0300c6cf86f59e46e81146798cac3c4ec7c2a91e995bf"} Feb 18 14:46:42 crc kubenswrapper[4739]: I0218 14:46:42.570164 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" 
event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerStarted","Data":"694011a4646803a99b218b06f5960e865bed9a664de92248d6d5d411626a40bb"} Feb 18 14:46:43 crc kubenswrapper[4739]: I0218 14:46:43.590567 4739 generic.go:334] "Generic (PLEG): container finished" podID="921cb713-1271-40ce-a50a-3444603bbb32" containerID="694011a4646803a99b218b06f5960e865bed9a664de92248d6d5d411626a40bb" exitCode=0 Feb 18 14:46:43 crc kubenswrapper[4739]: I0218 14:46:43.590677 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerDied","Data":"694011a4646803a99b218b06f5960e865bed9a664de92248d6d5d411626a40bb"} Feb 18 14:46:44 crc kubenswrapper[4739]: I0218 14:46:44.605868 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerStarted","Data":"1c2a851580a2605411e69647eb34e2ecb88f56a555327bc3d05c5a969653541a"} Feb 18 14:46:44 crc kubenswrapper[4739]: I0218 14:46:44.627196 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-74fzf" podStartSLOduration=3.160522045 podStartE2EDuration="5.62718003s" podCreationTimestamp="2026-02-18 14:46:39 +0000 UTC" firstStartedPulling="2026-02-18 14:46:41.559364033 +0000 UTC m=+2834.055084955" lastFinishedPulling="2026-02-18 14:46:44.026022018 +0000 UTC m=+2836.521742940" observedRunningTime="2026-02-18 14:46:44.624595845 +0000 UTC m=+2837.120316787" watchObservedRunningTime="2026-02-18 14:46:44.62718003 +0000 UTC m=+2837.122900952" Feb 18 14:46:48 crc kubenswrapper[4739]: I0218 14:46:48.649804 4739 generic.go:334] "Generic (PLEG): container finished" podID="aa0510e7-f2a3-4466-b797-dab2e7ec0218" containerID="fb0e030e4912a00d0734d07237c410d248f64fab7894be9ef716125bbc0533aa" exitCode=0 Feb 18 14:46:48 crc kubenswrapper[4739]: I0218 
14:46:48.649886 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" event={"ID":"aa0510e7-f2a3-4466-b797-dab2e7ec0218","Type":"ContainerDied","Data":"fb0e030e4912a00d0734d07237c410d248f64fab7894be9ef716125bbc0533aa"} Feb 18 14:46:49 crc kubenswrapper[4739]: I0218 14:46:49.876390 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:49 crc kubenswrapper[4739]: I0218 14:46:49.876918 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:49 crc kubenswrapper[4739]: I0218 14:46:49.935060 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.160645 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.179878 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180108 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180372 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw5d6\" (UniqueName: \"kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180426 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180498 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180584 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.228987 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6" (OuterVolumeSpecName: "kube-api-access-zw5d6") pod 
"aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "kube-api-access-zw5d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.229044 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.240800 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.241757 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.244217 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory" (OuterVolumeSpecName: "inventory") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.249503 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.266494 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283477 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283522 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283538 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw5d6\" (UniqueName: \"kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283552 
4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283565 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283578 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283589 4739 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.674325 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" event={"ID":"aa0510e7-f2a3-4466-b797-dab2e7ec0218","Type":"ContainerDied","Data":"fb7106cf2f98b5b393698d853885e2d731a92c39d93dbf1c2bec0a8cb53a7200"} Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.674724 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb7106cf2f98b5b393698d853885e2d731a92c39d93dbf1c2bec0a8cb53a7200" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.674364 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.759137 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.790998 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8"] Feb 18 14:46:50 crc kubenswrapper[4739]: E0218 14:46:50.791994 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0510e7-f2a3-4466-b797-dab2e7ec0218" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.792030 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0510e7-f2a3-4466-b797-dab2e7ec0218" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.792467 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0510e7-f2a3-4466-b797-dab2e7ec0218" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.793803 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.795810 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzt8v\" (UniqueName: \"kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.796430 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.796529 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.796731 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.796800 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.796927 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.797134 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.799474 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.799681 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.799782 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.800059 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.800296 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.832702 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8"] Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.856861 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.900914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901084 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzt8v\" (UniqueName: \"kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901294 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901348 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901552 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901637 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.907307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.907307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.907427 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.908024 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.908433 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.910315 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.921997 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzt8v\" (UniqueName: \"kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:51 crc kubenswrapper[4739]: I0218 14:46:51.124961 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:51 crc kubenswrapper[4739]: I0218 14:46:51.676740 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8"] Feb 18 14:46:52 crc kubenswrapper[4739]: I0218 14:46:52.697764 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" event={"ID":"76808ec1-db9d-494f-9d72-88b2bc28befb","Type":"ContainerStarted","Data":"657b540d985dd47d680ec53848eb97dfe6752bd7526553565353bdcea431e799"} Feb 18 14:46:52 crc kubenswrapper[4739]: I0218 14:46:52.697832 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-74fzf" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="registry-server" containerID="cri-o://1c2a851580a2605411e69647eb34e2ecb88f56a555327bc3d05c5a969653541a" gracePeriod=2 Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.712958 4739 generic.go:334] "Generic (PLEG): container finished" podID="921cb713-1271-40ce-a50a-3444603bbb32" containerID="1c2a851580a2605411e69647eb34e2ecb88f56a555327bc3d05c5a969653541a" exitCode=0 Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.713033 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerDied","Data":"1c2a851580a2605411e69647eb34e2ecb88f56a555327bc3d05c5a969653541a"} Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.716083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" event={"ID":"76808ec1-db9d-494f-9d72-88b2bc28befb","Type":"ContainerStarted","Data":"96da667efa594cf4dd420d385e2d89a921c807275a7e9b6d1e7d7700d6fb0c1c"} Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 
14:46:53.738267 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" podStartSLOduration=2.910134845 podStartE2EDuration="3.738246837s" podCreationTimestamp="2026-02-18 14:46:50 +0000 UTC" firstStartedPulling="2026-02-18 14:46:51.689270609 +0000 UTC m=+2844.184991532" lastFinishedPulling="2026-02-18 14:46:52.517382602 +0000 UTC m=+2845.013103524" observedRunningTime="2026-02-18 14:46:53.730396582 +0000 UTC m=+2846.226117504" watchObservedRunningTime="2026-02-18 14:46:53.738246837 +0000 UTC m=+2846.233967759" Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.881554 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.991186 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdkm2\" (UniqueName: \"kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2\") pod \"921cb713-1271-40ce-a50a-3444603bbb32\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.991827 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities\") pod \"921cb713-1271-40ce-a50a-3444603bbb32\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.991995 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content\") pod \"921cb713-1271-40ce-a50a-3444603bbb32\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.993070 4739 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities" (OuterVolumeSpecName: "utilities") pod "921cb713-1271-40ce-a50a-3444603bbb32" (UID: "921cb713-1271-40ce-a50a-3444603bbb32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.993429 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.997222 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2" (OuterVolumeSpecName: "kube-api-access-gdkm2") pod "921cb713-1271-40ce-a50a-3444603bbb32" (UID: "921cb713-1271-40ce-a50a-3444603bbb32"). InnerVolumeSpecName "kube-api-access-gdkm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.023490 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "921cb713-1271-40ce-a50a-3444603bbb32" (UID: "921cb713-1271-40ce-a50a-3444603bbb32"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.095645 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdkm2\" (UniqueName: \"kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.095690 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.729790 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerDied","Data":"c9bfd091b1236c4f08f03a532bdb9f6bd6df1ad907fddfcfbd523f44d2a32d93"} Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.729853 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.731202 4739 scope.go:117] "RemoveContainer" containerID="1c2a851580a2605411e69647eb34e2ecb88f56a555327bc3d05c5a969653541a" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.763780 4739 scope.go:117] "RemoveContainer" containerID="694011a4646803a99b218b06f5960e865bed9a664de92248d6d5d411626a40bb" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.766266 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.778746 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.788304 4739 scope.go:117] "RemoveContainer" containerID="cfbc3e7c21bfcf7fb2a0300c6cf86f59e46e81146798cac3c4ec7c2a91e995bf" Feb 18 14:46:56 crc kubenswrapper[4739]: I0218 14:46:56.428660 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921cb713-1271-40ce-a50a-3444603bbb32" path="/var/lib/kubelet/pods/921cb713-1271-40ce-a50a-3444603bbb32/volumes" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.372407 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.373012 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:46:59 crc kubenswrapper[4739]: 
I0218 14:46:59.373064 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.374588 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.374662 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" gracePeriod=600 Feb 18 14:46:59 crc kubenswrapper[4739]: E0218 14:46:59.495913 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.787383 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" exitCode=0 Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.787473 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" 
event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"} Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.787799 4739 scope.go:117] "RemoveContainer" containerID="9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.788766 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:46:59 crc kubenswrapper[4739]: E0218 14:46:59.789048 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.266389 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"] Feb 18 14:47:09 crc kubenswrapper[4739]: E0218 14:47:09.267537 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="extract-utilities" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.267559 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="extract-utilities" Feb 18 14:47:09 crc kubenswrapper[4739]: E0218 14:47:09.267576 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="registry-server" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.267585 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="registry-server" Feb 18 14:47:09 crc 
kubenswrapper[4739]: E0218 14:47:09.267603 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="extract-content" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.267610 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="extract-content" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.267840 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="registry-server" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.274300 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.283104 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"] Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.446523 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbpbl\" (UniqueName: \"kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.447198 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.447482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.549732 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.550191 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbpbl\" (UniqueName: \"kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.550318 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.550350 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.555549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.578195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbpbl\" (UniqueName: \"kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.605932 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:10 crc kubenswrapper[4739]: I0218 14:47:10.255557 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"] Feb 18 14:47:10 crc kubenswrapper[4739]: I0218 14:47:10.760435 4739 generic.go:334] "Generic (PLEG): container finished" podID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerID="3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585" exitCode=0 Feb 18 14:47:10 crc kubenswrapper[4739]: I0218 14:47:10.760522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerDied","Data":"3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585"} Feb 18 14:47:10 crc kubenswrapper[4739]: I0218 14:47:10.760567 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerStarted","Data":"dfb7ab8d99e11afa8c16b02a02b45beccad5fc8fd5bcfc3c1c5972ab28167d30"} Feb 18 14:47:11 crc kubenswrapper[4739]: I0218 14:47:11.777138 4739 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerStarted","Data":"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0"} Feb 18 14:47:14 crc kubenswrapper[4739]: I0218 14:47:14.412307 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:47:14 crc kubenswrapper[4739]: E0218 14:47:14.412969 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:47:14 crc kubenswrapper[4739]: I0218 14:47:14.817275 4739 generic.go:334] "Generic (PLEG): container finished" podID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerID="7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0" exitCode=0 Feb 18 14:47:14 crc kubenswrapper[4739]: I0218 14:47:14.817327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerDied","Data":"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0"} Feb 18 14:47:15 crc kubenswrapper[4739]: I0218 14:47:15.832918 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerStarted","Data":"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b"} Feb 18 14:47:15 crc kubenswrapper[4739]: I0218 14:47:15.856866 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g4dt7" 
podStartSLOduration=2.411639476 podStartE2EDuration="6.856849052s" podCreationTimestamp="2026-02-18 14:47:09 +0000 UTC" firstStartedPulling="2026-02-18 14:47:10.763985694 +0000 UTC m=+2863.259706616" lastFinishedPulling="2026-02-18 14:47:15.20919527 +0000 UTC m=+2867.704916192" observedRunningTime="2026-02-18 14:47:15.853475997 +0000 UTC m=+2868.349196929" watchObservedRunningTime="2026-02-18 14:47:15.856849052 +0000 UTC m=+2868.352569994" Feb 18 14:47:19 crc kubenswrapper[4739]: I0218 14:47:19.606993 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:19 crc kubenswrapper[4739]: I0218 14:47:19.607581 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:19 crc kubenswrapper[4739]: I0218 14:47:19.663241 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:25 crc kubenswrapper[4739]: I0218 14:47:25.411069 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:47:25 crc kubenswrapper[4739]: E0218 14:47:25.411978 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:47:29 crc kubenswrapper[4739]: I0218 14:47:29.663781 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:29 crc kubenswrapper[4739]: I0218 14:47:29.725688 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-g4dt7"] Feb 18 14:47:29 crc kubenswrapper[4739]: E0218 14:47:29.878589 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 18 14:47:29 crc kubenswrapper[4739]: I0218 14:47:29.971304 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g4dt7" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="registry-server" containerID="cri-o://758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b" gracePeriod=2 Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.494307 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.594307 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities\") pod \"31ef9789-e0a5-4ed0-a546-641aac5b15df\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.594905 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content\") pod \"31ef9789-e0a5-4ed0-a546-641aac5b15df\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.595218 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities" (OuterVolumeSpecName: "utilities") pod "31ef9789-e0a5-4ed0-a546-641aac5b15df" (UID: "31ef9789-e0a5-4ed0-a546-641aac5b15df"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.596217 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbpbl\" (UniqueName: \"kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl\") pod \"31ef9789-e0a5-4ed0-a546-641aac5b15df\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.597590 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.602123 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl" (OuterVolumeSpecName: "kube-api-access-vbpbl") pod "31ef9789-e0a5-4ed0-a546-641aac5b15df" (UID: "31ef9789-e0a5-4ed0-a546-641aac5b15df"). InnerVolumeSpecName "kube-api-access-vbpbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.661779 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31ef9789-e0a5-4ed0-a546-641aac5b15df" (UID: "31ef9789-e0a5-4ed0-a546-641aac5b15df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.700018 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.700050 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbpbl\" (UniqueName: \"kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl\") on node \"crc\" DevicePath \"\""
Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.983425 4739 generic.go:334] "Generic (PLEG): container finished" podID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerID="758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b" exitCode=0
Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.983490 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerDied","Data":"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b"}
Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.983515 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerDied","Data":"dfb7ab8d99e11afa8c16b02a02b45beccad5fc8fd5bcfc3c1c5972ab28167d30"}
Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.983544 4739 scope.go:117] "RemoveContainer" containerID="758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b"
Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.983709 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4dt7"
Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.010107 4739 scope.go:117] "RemoveContainer" containerID="7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0"
Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.030978 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"]
Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.041317 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"]
Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.048934 4739 scope.go:117] "RemoveContainer" containerID="3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585"
Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.098398 4739 scope.go:117] "RemoveContainer" containerID="758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b"
Feb 18 14:47:31 crc kubenswrapper[4739]: E0218 14:47:31.098935 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b\": container with ID starting with 758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b not found: ID does not exist" containerID="758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b"
Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.098976 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b"} err="failed to get container status \"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b\": rpc error: code = NotFound desc = could not find container \"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b\": container with ID starting with 758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b not found: ID does not exist"
Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.099001 4739 scope.go:117] "RemoveContainer" containerID="7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0"
Feb 18 14:47:31 crc kubenswrapper[4739]: E0218 14:47:31.099271 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0\": container with ID starting with 7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0 not found: ID does not exist" containerID="7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0"
Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.099297 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0"} err="failed to get container status \"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0\": rpc error: code = NotFound desc = could not find container \"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0\": container with ID starting with 7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0 not found: ID does not exist"
Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.099315 4739 scope.go:117] "RemoveContainer" containerID="3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585"
Feb 18 14:47:31 crc kubenswrapper[4739]: E0218 14:47:31.099592 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585\": container with ID starting with 3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585 not found: ID does not exist" containerID="3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585"
Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.099619 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585"} err="failed to get container status \"3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585\": rpc error: code = NotFound desc = could not find container \"3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585\": container with ID starting with 3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585 not found: ID does not exist"
Feb 18 14:47:32 crc kubenswrapper[4739]: I0218 14:47:32.425348 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" path="/var/lib/kubelet/pods/31ef9789-e0a5-4ed0-a546-641aac5b15df/volumes"
Feb 18 14:47:39 crc kubenswrapper[4739]: I0218 14:47:39.411167 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"
Feb 18 14:47:39 crc kubenswrapper[4739]: E0218 14:47:39.412025 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:47:53 crc kubenswrapper[4739]: I0218 14:47:53.411615 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"
Feb 18 14:47:53 crc kubenswrapper[4739]: E0218 14:47:53.412831 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:48:06 crc kubenswrapper[4739]: I0218 14:48:06.410996 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"
Feb 18 14:48:06 crc kubenswrapper[4739]: E0218 14:48:06.411965 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:48:19 crc kubenswrapper[4739]: I0218 14:48:19.410775 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"
Feb 18 14:48:19 crc kubenswrapper[4739]: E0218 14:48:19.411660 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:48:33 crc kubenswrapper[4739]: I0218 14:48:33.414940 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"
Feb 18 14:48:33 crc kubenswrapper[4739]: E0218 14:48:33.416006 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:48:36 crc kubenswrapper[4739]: I0218 14:48:36.691647 4739 generic.go:334] "Generic (PLEG): container finished" podID="76808ec1-db9d-494f-9d72-88b2bc28befb" containerID="96da667efa594cf4dd420d385e2d89a921c807275a7e9b6d1e7d7700d6fb0c1c" exitCode=0
Feb 18 14:48:36 crc kubenswrapper[4739]: I0218 14:48:36.691782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" event={"ID":"76808ec1-db9d-494f-9d72-88b2bc28befb","Type":"ContainerDied","Data":"96da667efa594cf4dd420d385e2d89a921c807275a7e9b6d1e7d7700d6fb0c1c"}
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.196511 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.316717 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") "
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.316876 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzt8v\" (UniqueName: \"kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") "
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.316975 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") "
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.317083 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") "
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.317130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") "
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.317169 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") "
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.317257 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") "
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.326392 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.331004 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v" (OuterVolumeSpecName: "kube-api-access-mzt8v") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "kube-api-access-mzt8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.354482 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.357288 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.359815 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.373968 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.374917 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory" (OuterVolumeSpecName: "inventory") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422305 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422350 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422366 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422380 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422396 4739 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422409 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422420 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzt8v\" (UniqueName: \"kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.717543 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" event={"ID":"76808ec1-db9d-494f-9d72-88b2bc28befb","Type":"ContainerDied","Data":"657b540d985dd47d680ec53848eb97dfe6752bd7526553565353bdcea431e799"}
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.717858 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657b540d985dd47d680ec53848eb97dfe6752bd7526553565353bdcea431e799"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.717597 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.815057 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"]
Feb 18 14:48:38 crc kubenswrapper[4739]: E0218 14:48:38.815679 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="extract-utilities"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.815701 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="extract-utilities"
Feb 18 14:48:38 crc kubenswrapper[4739]: E0218 14:48:38.815733 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="extract-content"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.815742 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="extract-content"
Feb 18 14:48:38 crc kubenswrapper[4739]: E0218 14:48:38.815766 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="registry-server"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.815775 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="registry-server"
Feb 18 14:48:38 crc kubenswrapper[4739]: E0218 14:48:38.815796 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76808ec1-db9d-494f-9d72-88b2bc28befb" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.815807 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="76808ec1-db9d-494f-9d72-88b2bc28befb" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.816095 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="registry-server"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.816130 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="76808ec1-db9d-494f-9d72-88b2bc28befb" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.817179 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.819334 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.819746 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.821046 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.821303 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.821429 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.829099 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"]
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.935784 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.936053 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.936108 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.936193 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn74c\" (UniqueName: \"kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.936258 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.039793 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.039850 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.039909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn74c\" (UniqueName: \"kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.039931 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.040034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.045762 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.046085 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.046949 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.047936 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.065924 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn74c\" (UniqueName: \"kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.141905 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.719101 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"]
Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.726533 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 14:48:40 crc kubenswrapper[4739]: I0218 14:48:40.739751 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" event={"ID":"61bf8a46-92c1-4b2e-9b8c-8206c618b98a","Type":"ContainerStarted","Data":"bfc6fb2c7af4721cda20daa88a5f20a73e3cf5c1320015a4258367e4bd4b50ed"}
Feb 18 14:48:40 crc kubenswrapper[4739]: I0218 14:48:40.740074 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" event={"ID":"61bf8a46-92c1-4b2e-9b8c-8206c618b98a","Type":"ContainerStarted","Data":"baeca6ca604c58789f14edfa8b4d71ad6a8b5f57cc825549b3663ebddc48966e"}
Feb 18 14:48:40 crc kubenswrapper[4739]: I0218 14:48:40.762136 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" podStartSLOduration=2.173603483 podStartE2EDuration="2.762113348s" podCreationTimestamp="2026-02-18 14:48:38 +0000 UTC" firstStartedPulling="2026-02-18 14:48:39.726254231 +0000 UTC m=+2952.221975153" lastFinishedPulling="2026-02-18 14:48:40.314764096 +0000 UTC m=+2952.810485018" observedRunningTime="2026-02-18 14:48:40.75932408 +0000 UTC m=+2953.255045012" watchObservedRunningTime="2026-02-18 14:48:40.762113348 +0000 UTC m=+2953.257834290"
Feb 18 14:48:47 crc kubenswrapper[4739]: I0218 14:48:47.411413 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"
Feb 18 14:48:47 crc kubenswrapper[4739]: E0218 14:48:47.412338 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:48:54 crc kubenswrapper[4739]: I0218 14:48:54.922060 4739 generic.go:334] "Generic (PLEG): container finished" podID="61bf8a46-92c1-4b2e-9b8c-8206c618b98a" containerID="bfc6fb2c7af4721cda20daa88a5f20a73e3cf5c1320015a4258367e4bd4b50ed" exitCode=0
Feb 18 14:48:54 crc kubenswrapper[4739]: I0218 14:48:54.922165 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" event={"ID":"61bf8a46-92c1-4b2e-9b8c-8206c618b98a","Type":"ContainerDied","Data":"bfc6fb2c7af4721cda20daa88a5f20a73e3cf5c1320015a4258367e4bd4b50ed"}
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.447707 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.650564 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1\") pod \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") "
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.650950 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory\") pod \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") "
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.651092 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn74c\" (UniqueName: \"kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c\") pod \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") "
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.651165 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0\") pod \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") "
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.651255 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam\") pod \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") "
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.656486 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c" (OuterVolumeSpecName: "kube-api-access-wn74c") pod "61bf8a46-92c1-4b2e-9b8c-8206c618b98a" (UID: "61bf8a46-92c1-4b2e-9b8c-8206c618b98a"). InnerVolumeSpecName "kube-api-access-wn74c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.680958 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory" (OuterVolumeSpecName: "inventory") pod "61bf8a46-92c1-4b2e-9b8c-8206c618b98a" (UID: "61bf8a46-92c1-4b2e-9b8c-8206c618b98a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.682667 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "61bf8a46-92c1-4b2e-9b8c-8206c618b98a" (UID: "61bf8a46-92c1-4b2e-9b8c-8206c618b98a"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.687596 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "61bf8a46-92c1-4b2e-9b8c-8206c618b98a" (UID: "61bf8a46-92c1-4b2e-9b8c-8206c618b98a"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.697562 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "61bf8a46-92c1-4b2e-9b8c-8206c618b98a" (UID: "61bf8a46-92c1-4b2e-9b8c-8206c618b98a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.755979 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.756029 4739 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.756046 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.756059 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn74c\" (UniqueName: \"kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.756597 4739 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.948970 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" event={"ID":"61bf8a46-92c1-4b2e-9b8c-8206c618b98a","Type":"ContainerDied","Data":"baeca6ca604c58789f14edfa8b4d71ad6a8b5f57cc825549b3663ebddc48966e"}
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.949049 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baeca6ca604c58789f14edfa8b4d71ad6a8b5f57cc825549b3663ebddc48966e"
Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.949051 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"
Feb 18 14:48:58 crc kubenswrapper[4739]: I0218 14:48:58.410545 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"
Feb 18 14:48:58 crc kubenswrapper[4739]: E0218 14:48:58.411229 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 14:49:09 crc kubenswrapper[4739]: I0218 14:49:09.411309 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"
Feb 18 14:49:09 crc kubenswrapper[4739]: E0218 14:49:09.412513 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\""
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:49:23 crc kubenswrapper[4739]: I0218 14:49:23.410287 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:49:23 crc kubenswrapper[4739]: E0218 14:49:23.411233 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:49:35 crc kubenswrapper[4739]: I0218 14:49:35.411212 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:49:35 crc kubenswrapper[4739]: E0218 14:49:35.411924 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:49:46 crc kubenswrapper[4739]: I0218 14:49:46.411043 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:49:46 crc kubenswrapper[4739]: E0218 14:49:46.411861 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:50:00 crc kubenswrapper[4739]: I0218 14:50:00.419330 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:50:00 crc kubenswrapper[4739]: E0218 14:50:00.420795 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:50:13 crc kubenswrapper[4739]: I0218 14:50:13.413227 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:50:13 crc kubenswrapper[4739]: E0218 14:50:13.416161 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:50:25 crc kubenswrapper[4739]: I0218 14:50:25.412810 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:50:25 crc kubenswrapper[4739]: E0218 14:50:25.413980 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:50:40 crc kubenswrapper[4739]: I0218 14:50:40.411486 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:50:40 crc kubenswrapper[4739]: E0218 14:50:40.412317 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:50:52 crc kubenswrapper[4739]: I0218 14:50:52.411414 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:50:52 crc kubenswrapper[4739]: E0218 14:50:52.412316 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:51:05 crc kubenswrapper[4739]: I0218 14:51:05.411202 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:51:05 crc kubenswrapper[4739]: E0218 14:51:05.412051 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:51:16 crc kubenswrapper[4739]: I0218 14:51:16.411177 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:51:16 crc kubenswrapper[4739]: E0218 14:51:16.411897 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:51:31 crc kubenswrapper[4739]: I0218 14:51:31.410987 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:51:31 crc kubenswrapper[4739]: E0218 14:51:31.411831 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:51:46 crc kubenswrapper[4739]: I0218 14:51:46.411587 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:51:46 crc kubenswrapper[4739]: E0218 14:51:46.412596 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:51:59 crc kubenswrapper[4739]: I0218 14:51:59.411142 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:51:59 crc kubenswrapper[4739]: I0218 14:51:59.941699 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd"} Feb 18 14:53:59 crc kubenswrapper[4739]: I0218 14:53:59.372275 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:53:59 crc kubenswrapper[4739]: I0218 14:53:59.372904 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:54:29 crc kubenswrapper[4739]: I0218 14:54:29.372882 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:54:29 crc kubenswrapper[4739]: I0218 14:54:29.373431 4739 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.372371 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.372932 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.372985 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.373857 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.373923 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" 
containerID="cri-o://626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd" gracePeriod=600 Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.879196 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd" exitCode=0 Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.879261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd"} Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.879821 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"} Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.879846 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.464274 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:55:15 crc kubenswrapper[4739]: E0218 14:55:15.465484 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bf8a46-92c1-4b2e-9b8c-8206c618b98a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.465505 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bf8a46-92c1-4b2e-9b8c-8206c618b98a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.465807 4739 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="61bf8a46-92c1-4b2e-9b8c-8206c618b98a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.468705 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.480730 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.527411 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.527592 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8wp\" (UniqueName: \"kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.527646 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.629974 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content\") pod \"redhat-operators-jw2pk\" (UID: 
\"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.630100 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc8wp\" (UniqueName: \"kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.630162 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.630715 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.630766 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.675072 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc8wp\" (UniqueName: \"kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " 
pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.789704 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:16 crc kubenswrapper[4739]: I0218 14:55:16.349486 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:55:17 crc kubenswrapper[4739]: I0218 14:55:17.090042 4739 generic.go:334] "Generic (PLEG): container finished" podID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerID="75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d" exitCode=0 Feb 18 14:55:17 crc kubenswrapper[4739]: I0218 14:55:17.090124 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerDied","Data":"75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d"} Feb 18 14:55:17 crc kubenswrapper[4739]: I0218 14:55:17.090570 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerStarted","Data":"d7c1c1de4dc9d0ec875b26c0571eaea101b3e4818dfa0680d7d39593a5b81682"} Feb 18 14:55:17 crc kubenswrapper[4739]: I0218 14:55:17.094470 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:55:20 crc kubenswrapper[4739]: I0218 14:55:20.131144 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerStarted","Data":"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074"} Feb 18 14:55:26 crc kubenswrapper[4739]: I0218 14:55:26.204710 4739 generic.go:334] "Generic (PLEG): container finished" podID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" 
containerID="6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074" exitCode=0 Feb 18 14:55:26 crc kubenswrapper[4739]: I0218 14:55:26.204746 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerDied","Data":"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074"} Feb 18 14:55:27 crc kubenswrapper[4739]: I0218 14:55:27.218343 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerStarted","Data":"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1"} Feb 18 14:55:27 crc kubenswrapper[4739]: I0218 14:55:27.247838 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jw2pk" podStartSLOduration=2.705990508 podStartE2EDuration="12.247820383s" podCreationTimestamp="2026-02-18 14:55:15 +0000 UTC" firstStartedPulling="2026-02-18 14:55:17.09419291 +0000 UTC m=+3349.589913832" lastFinishedPulling="2026-02-18 14:55:26.636022785 +0000 UTC m=+3359.131743707" observedRunningTime="2026-02-18 14:55:27.23856055 +0000 UTC m=+3359.734281502" watchObservedRunningTime="2026-02-18 14:55:27.247820383 +0000 UTC m=+3359.743541305" Feb 18 14:55:35 crc kubenswrapper[4739]: I0218 14:55:35.789962 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:35 crc kubenswrapper[4739]: I0218 14:55:35.790741 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:36 crc kubenswrapper[4739]: I0218 14:55:36.845760 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jw2pk" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" 
probeResult="failure" output=< Feb 18 14:55:36 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:36 crc kubenswrapper[4739]: > Feb 18 14:55:46 crc kubenswrapper[4739]: I0218 14:55:46.847591 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jw2pk" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" probeResult="failure" output=< Feb 18 14:55:46 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:46 crc kubenswrapper[4739]: > Feb 18 14:55:56 crc kubenswrapper[4739]: I0218 14:55:56.844601 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jw2pk" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" probeResult="failure" output=< Feb 18 14:55:56 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:56 crc kubenswrapper[4739]: > Feb 18 14:56:05 crc kubenswrapper[4739]: I0218 14:56:05.839725 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:56:05 crc kubenswrapper[4739]: I0218 14:56:05.900357 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:56:06 crc kubenswrapper[4739]: I0218 14:56:06.085631 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:56:07 crc kubenswrapper[4739]: I0218 14:56:07.744968 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jw2pk" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" containerID="cri-o://a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1" gracePeriod=2 Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.334835 4739 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.486986 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc8wp\" (UniqueName: \"kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp\") pod \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.487115 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities\") pod \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.487233 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content\") pod \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.488028 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities" (OuterVolumeSpecName: "utilities") pod "c65e9c1e-6895-4ddc-b74a-c424fea4c24d" (UID: "c65e9c1e-6895-4ddc-b74a-c424fea4c24d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.489265 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.494780 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp" (OuterVolumeSpecName: "kube-api-access-kc8wp") pod "c65e9c1e-6895-4ddc-b74a-c424fea4c24d" (UID: "c65e9c1e-6895-4ddc-b74a-c424fea4c24d"). InnerVolumeSpecName "kube-api-access-kc8wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.591377 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc8wp\" (UniqueName: \"kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.659923 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c65e9c1e-6895-4ddc-b74a-c424fea4c24d" (UID: "c65e9c1e-6895-4ddc-b74a-c424fea4c24d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.693975 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.757028 4739 generic.go:334] "Generic (PLEG): container finished" podID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerID="a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1" exitCode=0 Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.757080 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerDied","Data":"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1"} Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.757115 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.757132 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerDied","Data":"d7c1c1de4dc9d0ec875b26c0571eaea101b3e4818dfa0680d7d39593a5b81682"} Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.757153 4739 scope.go:117] "RemoveContainer" containerID="a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.784955 4739 scope.go:117] "RemoveContainer" containerID="6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.802246 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.814222 4739 scope.go:117] "RemoveContainer" containerID="75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.816469 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.880036 4739 scope.go:117] "RemoveContainer" containerID="a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1" Feb 18 14:56:08 crc kubenswrapper[4739]: E0218 14:56:08.880755 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1\": container with ID starting with a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1 not found: ID does not exist" containerID="a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.880833 4739 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1"} err="failed to get container status \"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1\": rpc error: code = NotFound desc = could not find container \"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1\": container with ID starting with a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1 not found: ID does not exist" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.880891 4739 scope.go:117] "RemoveContainer" containerID="6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074" Feb 18 14:56:08 crc kubenswrapper[4739]: E0218 14:56:08.881472 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074\": container with ID starting with 6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074 not found: ID does not exist" containerID="6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.881622 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074"} err="failed to get container status \"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074\": rpc error: code = NotFound desc = could not find container \"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074\": container with ID starting with 6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074 not found: ID does not exist" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.881722 4739 scope.go:117] "RemoveContainer" containerID="75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d" Feb 18 14:56:08 crc kubenswrapper[4739]: E0218 
14:56:08.882287 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d\": container with ID starting with 75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d not found: ID does not exist" containerID="75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.882325 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d"} err="failed to get container status \"75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d\": rpc error: code = NotFound desc = could not find container \"75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d\": container with ID starting with 75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d not found: ID does not exist" Feb 18 14:56:10 crc kubenswrapper[4739]: I0218 14:56:10.425283 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" path="/var/lib/kubelet/pods/c65e9c1e-6895-4ddc-b74a-c424fea4c24d/volumes" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.585822 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fmqk2"] Feb 18 14:56:26 crc kubenswrapper[4739]: E0218 14:56:26.586726 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="extract-utilities" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.586740 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="extract-utilities" Feb 18 14:56:26 crc kubenswrapper[4739]: E0218 14:56:26.586756 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.586762 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" Feb 18 14:56:26 crc kubenswrapper[4739]: E0218 14:56:26.586776 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="extract-content" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.586781 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="extract-content" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.587033 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.588731 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.600347 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmqk2"] Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.735754 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6phl5\" (UniqueName: \"kubernetes.io/projected/f143bfcf-f351-4ede-ab73-311c97dcb20d-kube-api-access-6phl5\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.735848 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-utilities\") pod \"community-operators-fmqk2\" (UID: 
\"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.735886 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-catalog-content\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.838053 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6phl5\" (UniqueName: \"kubernetes.io/projected/f143bfcf-f351-4ede-ab73-311c97dcb20d-kube-api-access-6phl5\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.838496 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-utilities\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.838646 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-catalog-content\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.839028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-utilities\") pod \"community-operators-fmqk2\" (UID: 
\"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.839092 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-catalog-content\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.867143 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6phl5\" (UniqueName: \"kubernetes.io/projected/f143bfcf-f351-4ede-ab73-311c97dcb20d-kube-api-access-6phl5\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.919020 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:27 crc kubenswrapper[4739]: I0218 14:56:27.494840 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmqk2"] Feb 18 14:56:27 crc kubenswrapper[4739]: I0218 14:56:27.951038 4739 generic.go:334] "Generic (PLEG): container finished" podID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerID="d68ad1d18d91197ec1f8e84e10ae66569f5c214d767791ef0a18af0cd8d3237b" exitCode=0 Feb 18 14:56:27 crc kubenswrapper[4739]: I0218 14:56:27.951148 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqk2" event={"ID":"f143bfcf-f351-4ede-ab73-311c97dcb20d","Type":"ContainerDied","Data":"d68ad1d18d91197ec1f8e84e10ae66569f5c214d767791ef0a18af0cd8d3237b"} Feb 18 14:56:27 crc kubenswrapper[4739]: I0218 14:56:27.951345 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqk2" event={"ID":"f143bfcf-f351-4ede-ab73-311c97dcb20d","Type":"ContainerStarted","Data":"0fc713da6fd348d2f8ab44a4aedd5b4a245f74b3dd2f7484d6e521dbc02aed14"} Feb 18 14:56:36 crc kubenswrapper[4739]: I0218 14:56:36.032514 4739 generic.go:334] "Generic (PLEG): container finished" podID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerID="3db33673d7628b06fbaba06cd09a2912b4ea7614af78f4dbe9f17b3b037b7284" exitCode=0 Feb 18 14:56:36 crc kubenswrapper[4739]: I0218 14:56:36.033044 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqk2" event={"ID":"f143bfcf-f351-4ede-ab73-311c97dcb20d","Type":"ContainerDied","Data":"3db33673d7628b06fbaba06cd09a2912b4ea7614af78f4dbe9f17b3b037b7284"} Feb 18 14:56:37 crc kubenswrapper[4739]: I0218 14:56:37.047857 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqk2" 
event={"ID":"f143bfcf-f351-4ede-ab73-311c97dcb20d","Type":"ContainerStarted","Data":"720086e6b307316c40afce7265cd05ecc4ba0790375e277ad74c6aaad6364bed"} Feb 18 14:56:37 crc kubenswrapper[4739]: I0218 14:56:37.086974 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fmqk2" podStartSLOduration=2.60712645 podStartE2EDuration="11.086952558s" podCreationTimestamp="2026-02-18 14:56:26 +0000 UTC" firstStartedPulling="2026-02-18 14:56:27.953089541 +0000 UTC m=+3420.448810463" lastFinishedPulling="2026-02-18 14:56:36.432915649 +0000 UTC m=+3428.928636571" observedRunningTime="2026-02-18 14:56:37.076218618 +0000 UTC m=+3429.571939550" watchObservedRunningTime="2026-02-18 14:56:37.086952558 +0000 UTC m=+3429.582673480" Feb 18 14:56:46 crc kubenswrapper[4739]: I0218 14:56:46.920350 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:46 crc kubenswrapper[4739]: I0218 14:56:46.921345 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:46 crc kubenswrapper[4739]: I0218 14:56:46.968929 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.211123 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.282648 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmqk2"] Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.319476 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.319707 4739 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-94tzm" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="registry-server" containerID="cri-o://07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689" gracePeriod=2 Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.865974 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.966727 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvzff\" (UniqueName: \"kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff\") pod \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.966959 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content\") pod \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.967011 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities\") pod \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.968165 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities" (OuterVolumeSpecName: "utilities") pod "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" (UID: "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.978286 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff" (OuterVolumeSpecName: "kube-api-access-lvzff") pod "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" (UID: "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f"). InnerVolumeSpecName "kube-api-access-lvzff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.039391 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" (UID: "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.070042 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvzff\" (UniqueName: \"kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.070333 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.070344 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.174542 4739 generic.go:334] "Generic (PLEG): container finished" podID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" 
containerID="07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689" exitCode=0 Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.175288 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.180364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerDied","Data":"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689"} Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.180422 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerDied","Data":"9db4c60d6322480e701f551598fedffb94eb253b0f0fc2549d5772b70af9210c"} Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.180455 4739 scope.go:117] "RemoveContainer" containerID="07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.238252 4739 scope.go:117] "RemoveContainer" containerID="20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.250714 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.277757 4739 scope.go:117] "RemoveContainer" containerID="18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.300545 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:56:48 crc kubenswrapper[4739]: E0218 14:56:48.301770 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fb1fe47_cb9e_4538_9fc8_a6e75ac4279f.slice/crio-9db4c60d6322480e701f551598fedffb94eb253b0f0fc2549d5772b70af9210c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fb1fe47_cb9e_4538_9fc8_a6e75ac4279f.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:56:48 crc kubenswrapper[4739]: E0218 14:56:48.302188 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fb1fe47_cb9e_4538_9fc8_a6e75ac4279f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fb1fe47_cb9e_4538_9fc8_a6e75ac4279f.slice/crio-9db4c60d6322480e701f551598fedffb94eb253b0f0fc2549d5772b70af9210c\": RecentStats: unable to find data in memory cache]" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.334573 4739 scope.go:117] "RemoveContainer" containerID="07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689" Feb 18 14:56:48 crc kubenswrapper[4739]: E0218 14:56:48.335443 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689\": container with ID starting with 07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689 not found: ID does not exist" containerID="07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.339625 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689"} err="failed to get container status \"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689\": rpc error: code = NotFound desc = could not find 
container \"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689\": container with ID starting with 07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689 not found: ID does not exist" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.339662 4739 scope.go:117] "RemoveContainer" containerID="20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548" Feb 18 14:56:48 crc kubenswrapper[4739]: E0218 14:56:48.340583 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548\": container with ID starting with 20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548 not found: ID does not exist" containerID="20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.340654 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548"} err="failed to get container status \"20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548\": rpc error: code = NotFound desc = could not find container \"20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548\": container with ID starting with 20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548 not found: ID does not exist" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.340687 4739 scope.go:117] "RemoveContainer" containerID="18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f" Feb 18 14:56:48 crc kubenswrapper[4739]: E0218 14:56:48.346297 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f\": container with ID starting with 18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f not found: ID does 
not exist" containerID="18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.346362 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f"} err="failed to get container status \"18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f\": rpc error: code = NotFound desc = could not find container \"18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f\": container with ID starting with 18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f not found: ID does not exist" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.429788 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" path="/var/lib/kubelet/pods/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f/volumes" Feb 18 14:56:59 crc kubenswrapper[4739]: I0218 14:56:59.372675 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:56:59 crc kubenswrapper[4739]: I0218 14:56:59.373156 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:57:29 crc kubenswrapper[4739]: I0218 14:57:29.372670 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 18 14:57:29 crc kubenswrapper[4739]: I0218 14:57:29.373488 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.006640 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:57:47 crc kubenswrapper[4739]: E0218 14:57:47.008804 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="extract-content" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.008921 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="extract-content" Feb 18 14:57:47 crc kubenswrapper[4739]: E0218 14:57:47.009018 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="registry-server" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.009124 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="registry-server" Feb 18 14:57:47 crc kubenswrapper[4739]: E0218 14:57:47.009228 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="extract-utilities" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.009323 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="extract-utilities" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.009988 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="registry-server" Feb 
18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.014061 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.030938 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.048014 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.048130 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bssz\" (UniqueName: \"kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.048173 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.150376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bssz\" (UniqueName: \"kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " 
pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.150472 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.150688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.151405 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.151416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.173636 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bssz\" (UniqueName: \"kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 
14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.341133 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.882310 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:57:48 crc kubenswrapper[4739]: I0218 14:57:48.812414 4739 generic.go:334] "Generic (PLEG): container finished" podID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerID="4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68" exitCode=0 Feb 18 14:57:48 crc kubenswrapper[4739]: I0218 14:57:48.812511 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerDied","Data":"4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68"} Feb 18 14:57:48 crc kubenswrapper[4739]: I0218 14:57:48.812889 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerStarted","Data":"2b9fcd42928c5809815eece70ea95f15c39ba7a2d4e00ab4a2c466ec692b62f7"} Feb 18 14:57:49 crc kubenswrapper[4739]: I0218 14:57:49.831344 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerStarted","Data":"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b"} Feb 18 14:57:50 crc kubenswrapper[4739]: I0218 14:57:50.845211 4739 generic.go:334] "Generic (PLEG): container finished" podID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerID="bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b" exitCode=0 Feb 18 14:57:50 crc kubenswrapper[4739]: I0218 14:57:50.845309 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerDied","Data":"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b"} Feb 18 14:57:51 crc kubenswrapper[4739]: I0218 14:57:51.862091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerStarted","Data":"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16"} Feb 18 14:57:51 crc kubenswrapper[4739]: I0218 14:57:51.886633 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-glcp4" podStartSLOduration=3.405724562 podStartE2EDuration="5.886613481s" podCreationTimestamp="2026-02-18 14:57:46 +0000 UTC" firstStartedPulling="2026-02-18 14:57:48.815597241 +0000 UTC m=+3501.311318183" lastFinishedPulling="2026-02-18 14:57:51.29648618 +0000 UTC m=+3503.792207102" observedRunningTime="2026-02-18 14:57:51.877936383 +0000 UTC m=+3504.373657305" watchObservedRunningTime="2026-02-18 14:57:51.886613481 +0000 UTC m=+3504.382334413" Feb 18 14:57:57 crc kubenswrapper[4739]: I0218 14:57:57.342011 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:57 crc kubenswrapper[4739]: I0218 14:57:57.342417 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:57 crc kubenswrapper[4739]: I0218 14:57:57.391217 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:57 crc kubenswrapper[4739]: I0218 14:57:57.973994 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:58 crc kubenswrapper[4739]: I0218 14:57:58.030165 4739 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.373100 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.373173 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.373225 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.374080 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.374135 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" gracePeriod=600 Feb 18 14:57:59 crc kubenswrapper[4739]: E0218 14:57:59.491593 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.943101 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" exitCode=0 Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.943670 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-glcp4" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="registry-server" containerID="cri-o://3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16" gracePeriod=2 Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.943182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"} Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.943793 4739 scope.go:117] "RemoveContainer" containerID="626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd" Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.944794 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:57:59 crc kubenswrapper[4739]: E0218 14:57:59.945271 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.486738 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.575370 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bssz\" (UniqueName: \"kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz\") pod \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.575586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content\") pod \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.575639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities\") pod \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.576937 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities" (OuterVolumeSpecName: "utilities") pod "6a4f0075-4eb5-40d8-918f-26e4975b18e0" (UID: "6a4f0075-4eb5-40d8-918f-26e4975b18e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.582791 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz" (OuterVolumeSpecName: "kube-api-access-7bssz") pod "6a4f0075-4eb5-40d8-918f-26e4975b18e0" (UID: "6a4f0075-4eb5-40d8-918f-26e4975b18e0"). InnerVolumeSpecName "kube-api-access-7bssz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.601348 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a4f0075-4eb5-40d8-918f-26e4975b18e0" (UID: "6a4f0075-4eb5-40d8-918f-26e4975b18e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.679243 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.679509 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.679622 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bssz\" (UniqueName: \"kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.955198 4739 generic.go:334] "Generic (PLEG): container finished" podID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" 
containerID="3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16" exitCode=0 Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.955296 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.955304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerDied","Data":"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16"} Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.955423 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerDied","Data":"2b9fcd42928c5809815eece70ea95f15c39ba7a2d4e00ab4a2c466ec692b62f7"} Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.955466 4739 scope.go:117] "RemoveContainer" containerID="3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.983459 4739 scope.go:117] "RemoveContainer" containerID="bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.999190 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.011771 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.029948 4739 scope.go:117] "RemoveContainer" containerID="4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.068111 4739 scope.go:117] "RemoveContainer" containerID="3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16" Feb 18 
14:58:01 crc kubenswrapper[4739]: E0218 14:58:01.068875 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16\": container with ID starting with 3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16 not found: ID does not exist" containerID="3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.068927 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16"} err="failed to get container status \"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16\": rpc error: code = NotFound desc = could not find container \"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16\": container with ID starting with 3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16 not found: ID does not exist" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.068962 4739 scope.go:117] "RemoveContainer" containerID="bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b" Feb 18 14:58:01 crc kubenswrapper[4739]: E0218 14:58:01.069243 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b\": container with ID starting with bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b not found: ID does not exist" containerID="bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.069328 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b"} err="failed to get container status 
\"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b\": rpc error: code = NotFound desc = could not find container \"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b\": container with ID starting with bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b not found: ID does not exist" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.069408 4739 scope.go:117] "RemoveContainer" containerID="4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68" Feb 18 14:58:01 crc kubenswrapper[4739]: E0218 14:58:01.069738 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68\": container with ID starting with 4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68 not found: ID does not exist" containerID="4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.069772 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68"} err="failed to get container status \"4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68\": rpc error: code = NotFound desc = could not find container \"4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68\": container with ID starting with 4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68 not found: ID does not exist" Feb 18 14:58:02 crc kubenswrapper[4739]: I0218 14:58:02.425479 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" path="/var/lib/kubelet/pods/6a4f0075-4eb5-40d8-918f-26e4975b18e0/volumes" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.693398 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:10 
crc kubenswrapper[4739]: E0218 14:58:10.694295 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="extract-content" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.694307 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="extract-content" Feb 18 14:58:10 crc kubenswrapper[4739]: E0218 14:58:10.694337 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="extract-utilities" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.694345 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="extract-utilities" Feb 18 14:58:10 crc kubenswrapper[4739]: E0218 14:58:10.694359 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="registry-server" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.694365 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="registry-server" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.694772 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="registry-server" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.696394 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.723302 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.829104 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.829311 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.829549 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvf96\" (UniqueName: \"kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.931794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.931866 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.931913 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvf96\" (UniqueName: \"kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.932421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.932544 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.957500 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvf96\" (UniqueName: \"kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:11 crc kubenswrapper[4739]: I0218 14:58:11.021537 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:11 crc kubenswrapper[4739]: I0218 14:58:11.585847 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:12 crc kubenswrapper[4739]: I0218 14:58:12.134093 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerID="f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50" exitCode=0 Feb 18 14:58:12 crc kubenswrapper[4739]: I0218 14:58:12.134188 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerDied","Data":"f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50"} Feb 18 14:58:12 crc kubenswrapper[4739]: I0218 14:58:12.134411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerStarted","Data":"c535dbd06706ccfeb23e10164229142ffb3298e7e455d6c293695b16f23adee8"} Feb 18 14:58:13 crc kubenswrapper[4739]: I0218 14:58:13.146364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerStarted","Data":"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537"} Feb 18 14:58:15 crc kubenswrapper[4739]: I0218 14:58:15.168632 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerID="6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537" exitCode=0 Feb 18 14:58:15 crc kubenswrapper[4739]: I0218 14:58:15.168985 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" 
event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerDied","Data":"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537"} Feb 18 14:58:15 crc kubenswrapper[4739]: I0218 14:58:15.410967 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:58:15 crc kubenswrapper[4739]: E0218 14:58:15.411382 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:58:16 crc kubenswrapper[4739]: I0218 14:58:16.180753 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerStarted","Data":"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308"} Feb 18 14:58:16 crc kubenswrapper[4739]: I0218 14:58:16.198719 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xvz72" podStartSLOduration=2.789859708 podStartE2EDuration="6.1986987s" podCreationTimestamp="2026-02-18 14:58:10 +0000 UTC" firstStartedPulling="2026-02-18 14:58:12.136475324 +0000 UTC m=+3524.632196256" lastFinishedPulling="2026-02-18 14:58:15.545314326 +0000 UTC m=+3528.041035248" observedRunningTime="2026-02-18 14:58:16.197798118 +0000 UTC m=+3528.693519060" watchObservedRunningTime="2026-02-18 14:58:16.1986987 +0000 UTC m=+3528.694419622" Feb 18 14:58:21 crc kubenswrapper[4739]: I0218 14:58:21.022430 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:21 crc 
kubenswrapper[4739]: I0218 14:58:21.022876 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:22 crc kubenswrapper[4739]: I0218 14:58:22.085692 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xvz72" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="registry-server" probeResult="failure" output=< Feb 18 14:58:22 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:58:22 crc kubenswrapper[4739]: > Feb 18 14:58:29 crc kubenswrapper[4739]: I0218 14:58:29.410824 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:58:29 crc kubenswrapper[4739]: E0218 14:58:29.411929 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:58:31 crc kubenswrapper[4739]: I0218 14:58:31.102328 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:31 crc kubenswrapper[4739]: I0218 14:58:31.160739 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:31 crc kubenswrapper[4739]: I0218 14:58:31.344664 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:32 crc kubenswrapper[4739]: I0218 14:58:32.357048 4739 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-xvz72" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="registry-server" containerID="cri-o://f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308" gracePeriod=2 Feb 18 14:58:32 crc kubenswrapper[4739]: I0218 14:58:32.951434 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.072840 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities\") pod \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.073742 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content\") pod \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.073851 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvf96\" (UniqueName: \"kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96\") pod \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.073859 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities" (OuterVolumeSpecName: "utilities") pod "7d2e5425-8a4c-4e24-ab8a-310311b52e64" (UID: "7d2e5425-8a4c-4e24-ab8a-310311b52e64"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.075133 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.080118 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96" (OuterVolumeSpecName: "kube-api-access-pvf96") pod "7d2e5425-8a4c-4e24-ab8a-310311b52e64" (UID: "7d2e5425-8a4c-4e24-ab8a-310311b52e64"). InnerVolumeSpecName "kube-api-access-pvf96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.123978 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d2e5425-8a4c-4e24-ab8a-310311b52e64" (UID: "7d2e5425-8a4c-4e24-ab8a-310311b52e64"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.177210 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.177247 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvf96\" (UniqueName: \"kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.371384 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerID="f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308" exitCode=0 Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.371468 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerDied","Data":"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308"} Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.371477 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.371505 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerDied","Data":"c535dbd06706ccfeb23e10164229142ffb3298e7e455d6c293695b16f23adee8"} Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.371526 4739 scope.go:117] "RemoveContainer" containerID="f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.399601 4739 scope.go:117] "RemoveContainer" containerID="6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.411376 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.421994 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.438499 4739 scope.go:117] "RemoveContainer" containerID="f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.479944 4739 scope.go:117] "RemoveContainer" containerID="f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308" Feb 18 14:58:33 crc kubenswrapper[4739]: E0218 14:58:33.480584 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308\": container with ID starting with f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308 not found: ID does not exist" containerID="f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.480626 4739 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308"} err="failed to get container status \"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308\": rpc error: code = NotFound desc = could not find container \"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308\": container with ID starting with f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308 not found: ID does not exist" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.480653 4739 scope.go:117] "RemoveContainer" containerID="6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537" Feb 18 14:58:33 crc kubenswrapper[4739]: E0218 14:58:33.483308 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537\": container with ID starting with 6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537 not found: ID does not exist" containerID="6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.483346 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537"} err="failed to get container status \"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537\": rpc error: code = NotFound desc = could not find container \"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537\": container with ID starting with 6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537 not found: ID does not exist" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.483376 4739 scope.go:117] "RemoveContainer" containerID="f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50" Feb 18 14:58:33 crc kubenswrapper[4739]: E0218 
14:58:33.483768 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50\": container with ID starting with f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50 not found: ID does not exist" containerID="f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.483815 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50"} err="failed to get container status \"f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50\": rpc error: code = NotFound desc = could not find container \"f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50\": container with ID starting with f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50 not found: ID does not exist" Feb 18 14:58:34 crc kubenswrapper[4739]: I0218 14:58:34.422945 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" path="/var/lib/kubelet/pods/7d2e5425-8a4c-4e24-ab8a-310311b52e64/volumes" Feb 18 14:58:41 crc kubenswrapper[4739]: I0218 14:58:41.411330 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:58:41 crc kubenswrapper[4739]: E0218 14:58:41.412365 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:58:53 crc kubenswrapper[4739]: I0218 14:58:53.410562 
4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:58:53 crc kubenswrapper[4739]: E0218 14:58:53.411419 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:59:05 crc kubenswrapper[4739]: I0218 14:59:05.410794 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:59:05 crc kubenswrapper[4739]: E0218 14:59:05.411633 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:59:16 crc kubenswrapper[4739]: I0218 14:59:16.411096 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:59:16 crc kubenswrapper[4739]: E0218 14:59:16.411935 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:59:27 crc kubenswrapper[4739]: I0218 
14:59:27.410918 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:59:27 crc kubenswrapper[4739]: E0218 14:59:27.411687 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:59:40 crc kubenswrapper[4739]: I0218 14:59:40.410582 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:59:40 crc kubenswrapper[4739]: E0218 14:59:40.411525 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:59:52 crc kubenswrapper[4739]: I0218 14:59:52.411511 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:59:52 crc kubenswrapper[4739]: E0218 14:59:52.412310 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:00:00 crc 
kubenswrapper[4739]: I0218 15:00:00.165592 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr"] Feb 18 15:00:00 crc kubenswrapper[4739]: E0218 15:00:00.166698 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="extract-utilities" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.166777 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="extract-utilities" Feb 18 15:00:00 crc kubenswrapper[4739]: E0218 15:00:00.166802 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="extract-content" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.166810 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="extract-content" Feb 18 15:00:00 crc kubenswrapper[4739]: E0218 15:00:00.166846 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="registry-server" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.166856 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="registry-server" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.167146 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="registry-server" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.168027 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.171309 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.171623 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.177737 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr"] Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.262129 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5lqg\" (UniqueName: \"kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.262585 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.262697 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.366945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.367120 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5lqg\" (UniqueName: \"kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.367205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.368358 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.373614 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.386683 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5lqg\" (UniqueName: \"kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.496384 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.997026 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr"] Feb 18 15:00:01 crc kubenswrapper[4739]: I0218 15:00:01.416657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" event={"ID":"728976dc-da2b-4408-895b-a95d93c23eaa","Type":"ContainerStarted","Data":"4b2109d2b88bdccb1a25270c62d7be3a7ff8386c84518e3266ea3427cd1d517b"} Feb 18 15:00:01 crc kubenswrapper[4739]: I0218 15:00:01.417852 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" event={"ID":"728976dc-da2b-4408-895b-a95d93c23eaa","Type":"ContainerStarted","Data":"a86f739d4db017fb3ab973a7c91ce9e87f8a45548da3a3625143d544ccb633d5"} Feb 18 15:00:01 crc kubenswrapper[4739]: I0218 15:00:01.466182 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" 
podStartSLOduration=1.466162739 podStartE2EDuration="1.466162739s" podCreationTimestamp="2026-02-18 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:00:01.445758514 +0000 UTC m=+3633.941479436" watchObservedRunningTime="2026-02-18 15:00:01.466162739 +0000 UTC m=+3633.961883661" Feb 18 15:00:02 crc kubenswrapper[4739]: I0218 15:00:02.429307 4739 generic.go:334] "Generic (PLEG): container finished" podID="728976dc-da2b-4408-895b-a95d93c23eaa" containerID="4b2109d2b88bdccb1a25270c62d7be3a7ff8386c84518e3266ea3427cd1d517b" exitCode=0 Feb 18 15:00:02 crc kubenswrapper[4739]: I0218 15:00:02.429366 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" event={"ID":"728976dc-da2b-4408-895b-a95d93c23eaa","Type":"ContainerDied","Data":"4b2109d2b88bdccb1a25270c62d7be3a7ff8386c84518e3266ea3427cd1d517b"} Feb 18 15:00:03 crc kubenswrapper[4739]: I0218 15:00:03.411574 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:00:03 crc kubenswrapper[4739]: E0218 15:00:03.412227 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.024233 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.090346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume\") pod \"728976dc-da2b-4408-895b-a95d93c23eaa\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.090967 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5lqg\" (UniqueName: \"kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg\") pod \"728976dc-da2b-4408-895b-a95d93c23eaa\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.091076 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume\") pod \"728976dc-da2b-4408-895b-a95d93c23eaa\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.091990 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume" (OuterVolumeSpecName: "config-volume") pod "728976dc-da2b-4408-895b-a95d93c23eaa" (UID: "728976dc-da2b-4408-895b-a95d93c23eaa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.098008 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "728976dc-da2b-4408-895b-a95d93c23eaa" (UID: "728976dc-da2b-4408-895b-a95d93c23eaa"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.098143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg" (OuterVolumeSpecName: "kube-api-access-z5lqg") pod "728976dc-da2b-4408-895b-a95d93c23eaa" (UID: "728976dc-da2b-4408-895b-a95d93c23eaa"). InnerVolumeSpecName "kube-api-access-z5lqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.194218 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5lqg\" (UniqueName: \"kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.194279 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.194294 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.873974 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" event={"ID":"728976dc-da2b-4408-895b-a95d93c23eaa","Type":"ContainerDied","Data":"a86f739d4db017fb3ab973a7c91ce9e87f8a45548da3a3625143d544ccb633d5"} Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.874019 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a86f739d4db017fb3ab973a7c91ce9e87f8a45548da3a3625143d544ccb633d5" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.874079 4739 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:07 crc kubenswrapper[4739]: I0218 15:00:07.106125 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l"] Feb 18 15:00:07 crc kubenswrapper[4739]: I0218 15:00:07.117043 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l"] Feb 18 15:00:08 crc kubenswrapper[4739]: I0218 15:00:08.426832 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2918ab-f9b2-46b1-9895-7de44312e98e" path="/var/lib/kubelet/pods/8c2918ab-f9b2-46b1-9895-7de44312e98e/volumes" Feb 18 15:00:18 crc kubenswrapper[4739]: I0218 15:00:18.420929 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:00:18 crc kubenswrapper[4739]: E0218 15:00:18.421794 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:00:29 crc kubenswrapper[4739]: I0218 15:00:29.410626 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:00:29 crc kubenswrapper[4739]: E0218 15:00:29.411411 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:00:37 crc kubenswrapper[4739]: I0218 15:00:37.844582 4739 scope.go:117] "RemoveContainer" containerID="a63b0fe82e01dc057994e21049631942cf32124ffb8f8b9b2acf4cf4375ae993" Feb 18 15:00:41 crc kubenswrapper[4739]: I0218 15:00:41.411538 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:00:41 crc kubenswrapper[4739]: E0218 15:00:41.412456 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:00:52 crc kubenswrapper[4739]: I0218 15:00:52.410804 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:00:52 crc kubenswrapper[4739]: E0218 15:00:52.411538 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.152680 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523781-z64zk"] Feb 18 15:01:00 crc kubenswrapper[4739]: E0218 15:01:00.153985 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728976dc-da2b-4408-895b-a95d93c23eaa" 
containerName="collect-profiles" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.154004 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="728976dc-da2b-4408-895b-a95d93c23eaa" containerName="collect-profiles" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.154259 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="728976dc-da2b-4408-895b-a95d93c23eaa" containerName="collect-profiles" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.155161 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.183608 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523781-z64zk"] Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.251901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczs9\" (UniqueName: \"kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.251976 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.252119 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " 
pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.252189 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.354815 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mczs9\" (UniqueName: \"kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.354875 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.354985 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.355065 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " 
pod="openstack/keystone-cron-29523781-z64zk"
Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.362123 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk"
Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.362153 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk"
Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.364708 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk"
Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.373322 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mczs9\" (UniqueName: \"kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk"
Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.477015 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523781-z64zk"
Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.971814 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523781-z64zk"]
Feb 18 15:01:01 crc kubenswrapper[4739]: I0218 15:01:01.458192 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-z64zk" event={"ID":"28825764-dace-4769-b71e-4d55b8aa1d97","Type":"ContainerStarted","Data":"143c4a05a618f2ea88fdf0a7c23dcb1be159d0801ceab94582f7c94766c5f06f"}
Feb 18 15:01:01 crc kubenswrapper[4739]: I0218 15:01:01.458531 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-z64zk" event={"ID":"28825764-dace-4769-b71e-4d55b8aa1d97","Type":"ContainerStarted","Data":"e9ddf44b0aefad5f9fe9a71113008b11ece17f69b1425c1ef2033929a919afe3"}
Feb 18 15:01:01 crc kubenswrapper[4739]: I0218 15:01:01.481370 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523781-z64zk" podStartSLOduration=1.48135139 podStartE2EDuration="1.48135139s" podCreationTimestamp="2026-02-18 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:01:01.472796154 +0000 UTC m=+3693.968517086" watchObservedRunningTime="2026-02-18 15:01:01.48135139 +0000 UTC m=+3693.977072312"
Feb 18 15:01:06 crc kubenswrapper[4739]: I0218 15:01:06.411044 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:01:06 crc kubenswrapper[4739]: E0218 15:01:06.411936 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:01:06 crc kubenswrapper[4739]: I0218 15:01:06.510549 4739 generic.go:334] "Generic (PLEG): container finished" podID="28825764-dace-4769-b71e-4d55b8aa1d97" containerID="143c4a05a618f2ea88fdf0a7c23dcb1be159d0801ceab94582f7c94766c5f06f" exitCode=0
Feb 18 15:01:06 crc kubenswrapper[4739]: I0218 15:01:06.510598 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-z64zk" event={"ID":"28825764-dace-4769-b71e-4d55b8aa1d97","Type":"ContainerDied","Data":"143c4a05a618f2ea88fdf0a7c23dcb1be159d0801ceab94582f7c94766c5f06f"}
Feb 18 15:01:07 crc kubenswrapper[4739]: I0218 15:01:07.928869 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523781-z64zk"
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.039755 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle\") pod \"28825764-dace-4769-b71e-4d55b8aa1d97\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") "
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.039851 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data\") pod \"28825764-dace-4769-b71e-4d55b8aa1d97\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") "
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.040029 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mczs9\" (UniqueName: \"kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9\") pod \"28825764-dace-4769-b71e-4d55b8aa1d97\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") "
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.040098 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys\") pod \"28825764-dace-4769-b71e-4d55b8aa1d97\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") "
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.045993 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9" (OuterVolumeSpecName: "kube-api-access-mczs9") pod "28825764-dace-4769-b71e-4d55b8aa1d97" (UID: "28825764-dace-4769-b71e-4d55b8aa1d97"). InnerVolumeSpecName "kube-api-access-mczs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.046465 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "28825764-dace-4769-b71e-4d55b8aa1d97" (UID: "28825764-dace-4769-b71e-4d55b8aa1d97"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.071308 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28825764-dace-4769-b71e-4d55b8aa1d97" (UID: "28825764-dace-4769-b71e-4d55b8aa1d97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.098981 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data" (OuterVolumeSpecName: "config-data") pod "28825764-dace-4769-b71e-4d55b8aa1d97" (UID: "28825764-dace-4769-b71e-4d55b8aa1d97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.143159 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.143193 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.143203 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mczs9\" (UniqueName: \"kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9\") on node \"crc\" DevicePath \"\""
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.143213 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.530567 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-z64zk" event={"ID":"28825764-dace-4769-b71e-4d55b8aa1d97","Type":"ContainerDied","Data":"e9ddf44b0aefad5f9fe9a71113008b11ece17f69b1425c1ef2033929a919afe3"}
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.530596 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523781-z64zk"
Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.530609 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9ddf44b0aefad5f9fe9a71113008b11ece17f69b1425c1ef2033929a919afe3"
Feb 18 15:01:19 crc kubenswrapper[4739]: I0218 15:01:19.414575 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:01:19 crc kubenswrapper[4739]: E0218 15:01:19.415965 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:01:33 crc kubenswrapper[4739]: I0218 15:01:33.411365 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:01:33 crc kubenswrapper[4739]: E0218 15:01:33.412196 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:01:44 crc kubenswrapper[4739]: I0218 15:01:44.411169 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:01:44 crc kubenswrapper[4739]: E0218 15:01:44.411989 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:01:56 crc kubenswrapper[4739]: I0218 15:01:56.411964 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:01:56 crc kubenswrapper[4739]: E0218 15:01:56.415168 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:02:09 crc kubenswrapper[4739]: I0218 15:02:09.410257 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:02:09 crc kubenswrapper[4739]: E0218 15:02:09.410981 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:02:23 crc kubenswrapper[4739]: I0218 15:02:23.410904 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:02:23 crc kubenswrapper[4739]: E0218 15:02:23.411870 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:02:38 crc kubenswrapper[4739]: I0218 15:02:38.437248 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:02:38 crc kubenswrapper[4739]: E0218 15:02:38.438283 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:02:51 crc kubenswrapper[4739]: I0218 15:02:51.412346 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:02:51 crc kubenswrapper[4739]: E0218 15:02:51.413789 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:03:06 crc kubenswrapper[4739]: I0218 15:03:06.410358 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:03:07 crc kubenswrapper[4739]: I0218 15:03:07.701576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b"}
Feb 18 15:05:29 crc kubenswrapper[4739]: I0218 15:05:29.375030 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 15:05:29 crc kubenswrapper[4739]: I0218 15:05:29.375568 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 15:05:59 crc kubenswrapper[4739]: I0218 15:05:59.373543 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 15:05:59 crc kubenswrapper[4739]: I0218 15:05:59.374366 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.373426 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.374071 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.374127 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4"
Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.375153 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.375225 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b" gracePeriod=600
Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.891331 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b" exitCode=0
Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.891425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b"}
Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.891653 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"}
Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.891683 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.544877 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"]
Feb 18 15:06:48 crc kubenswrapper[4739]: E0218 15:06:48.546112 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28825764-dace-4769-b71e-4d55b8aa1d97" containerName="keystone-cron"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.546126 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="28825764-dace-4769-b71e-4d55b8aa1d97" containerName="keystone-cron"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.546356 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="28825764-dace-4769-b71e-4d55b8aa1d97" containerName="keystone-cron"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.548056 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.561957 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"]
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.673353 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.673521 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.673724 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v4sj\" (UniqueName: \"kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.776688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.776772 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v4sj\" (UniqueName: \"kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.776997 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.777520 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.777575 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.805378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v4sj\" (UniqueName: \"kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.874557 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:49 crc kubenswrapper[4739]: I0218 15:06:49.435530 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"]
Feb 18 15:06:50 crc kubenswrapper[4739]: I0218 15:06:50.126434 4739 generic.go:334] "Generic (PLEG): container finished" podID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerID="e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9" exitCode=0
Feb 18 15:06:50 crc kubenswrapper[4739]: I0218 15:06:50.126628 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerDied","Data":"e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9"}
Feb 18 15:06:50 crc kubenswrapper[4739]: I0218 15:06:50.127623 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerStarted","Data":"ca3bb1bb9a12bbf83e9057bedfb179dab26a7c891d3e1272b80769f2b56ebde9"}
Feb 18 15:06:50 crc kubenswrapper[4739]: I0218 15:06:50.128939 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 15:06:51 crc kubenswrapper[4739]: I0218 15:06:51.139868 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerStarted","Data":"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c"}
Feb 18 15:06:57 crc kubenswrapper[4739]: I0218 15:06:57.199923 4739 generic.go:334] "Generic (PLEG): container finished" podID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerID="018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c" exitCode=0
Feb 18 15:06:57 crc kubenswrapper[4739]: I0218 15:06:57.199956 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerDied","Data":"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c"}
Feb 18 15:06:58 crc kubenswrapper[4739]: I0218 15:06:58.211123 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerStarted","Data":"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93"}
Feb 18 15:06:58 crc kubenswrapper[4739]: I0218 15:06:58.233257 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chrlt" podStartSLOduration=2.45058507 podStartE2EDuration="10.233239812s" podCreationTimestamp="2026-02-18 15:06:48 +0000 UTC" firstStartedPulling="2026-02-18 15:06:50.128735091 +0000 UTC m=+4042.624456003" lastFinishedPulling="2026-02-18 15:06:57.911389823 +0000 UTC m=+4050.407110745" observedRunningTime="2026-02-18 15:06:58.230052182 +0000 UTC m=+4050.725773104" watchObservedRunningTime="2026-02-18 15:06:58.233239812 +0000 UTC m=+4050.728960734"
Feb 18 15:06:58 crc kubenswrapper[4739]: I0218 15:06:58.875712 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:58 crc kubenswrapper[4739]: I0218 15:06:58.875812 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:06:59 crc kubenswrapper[4739]: I0218 15:06:59.930790 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chrlt" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:06:59 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:06:59 crc kubenswrapper[4739]: >
Feb 18 15:07:09 crc kubenswrapper[4739]: I0218 15:07:09.929284 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chrlt" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:07:09 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:07:09 crc kubenswrapper[4739]: >
Feb 18 15:07:19 crc kubenswrapper[4739]: I0218 15:07:19.929471 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chrlt" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:07:19 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:07:19 crc kubenswrapper[4739]: >
Feb 18 15:07:29 crc kubenswrapper[4739]: I0218 15:07:29.927486 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chrlt" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:07:29 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:07:29 crc kubenswrapper[4739]: >
Feb 18 15:07:38 crc kubenswrapper[4739]: I0218 15:07:38.935831 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:07:38 crc kubenswrapper[4739]: I0218 15:07:38.999412 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:07:39 crc kubenswrapper[4739]: I0218 15:07:39.176573 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"]
Feb 18 15:07:40 crc kubenswrapper[4739]: I0218 15:07:40.684684 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chrlt" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" containerID="cri-o://7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93" gracePeriod=2
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.231060 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.301057 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities\") pod \"164a424c-e71a-43f3-9f77-bb4fe38a744d\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") "
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.301362 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v4sj\" (UniqueName: \"kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj\") pod \"164a424c-e71a-43f3-9f77-bb4fe38a744d\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") "
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.301431 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content\") pod \"164a424c-e71a-43f3-9f77-bb4fe38a744d\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") "
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.302026 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities" (OuterVolumeSpecName: "utilities") pod "164a424c-e71a-43f3-9f77-bb4fe38a744d" (UID: "164a424c-e71a-43f3-9f77-bb4fe38a744d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.309856 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj" (OuterVolumeSpecName: "kube-api-access-5v4sj") pod "164a424c-e71a-43f3-9f77-bb4fe38a744d" (UID: "164a424c-e71a-43f3-9f77-bb4fe38a744d"). InnerVolumeSpecName "kube-api-access-5v4sj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.311032 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.311079 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v4sj\" (UniqueName: \"kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj\") on node \"crc\" DevicePath \"\""
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.421502 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "164a424c-e71a-43f3-9f77-bb4fe38a744d" (UID: "164a424c-e71a-43f3-9f77-bb4fe38a744d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.515022 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.700865 4739 generic.go:334] "Generic (PLEG): container finished" podID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerID="7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93" exitCode=0
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.700931 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chrlt"
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.700954 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerDied","Data":"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93"}
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.701032 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerDied","Data":"ca3bb1bb9a12bbf83e9057bedfb179dab26a7c891d3e1272b80769f2b56ebde9"}
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.701078 4739 scope.go:117] "RemoveContainer" containerID="7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93"
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.737434 4739 scope.go:117] "RemoveContainer" containerID="018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c"
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.744100 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"]
Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.754173 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"]
Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.318693 4739 scope.go:117] "RemoveContainer" containerID="e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9"
Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.375968 4739 scope.go:117] "RemoveContainer" containerID="7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93"
Feb 18 15:07:42 crc kubenswrapper[4739]: E0218 15:07:42.376407 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93\": container with ID starting with 7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93 not found: ID does not exist" containerID="7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93"
Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.376483 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93"} err="failed to get container status \"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93\": rpc error: code = NotFound desc = could not find container \"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93\": container with ID starting with 7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93 not found: ID does not exist"
Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.376520 4739 scope.go:117] "RemoveContainer" containerID="018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c"
Feb 18 15:07:42 crc kubenswrapper[4739]: E0218 15:07:42.376850 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c\": container with ID starting with 018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c not found: ID does not exist" containerID="018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c"
Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.376881 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c"} err="failed to get container status \"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c\": rpc error: code = NotFound desc = could not find container \"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c\": container with ID starting with 018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c not found: ID does not exist"
Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.376908 4739 scope.go:117] "RemoveContainer" containerID="e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9"
Feb 18 15:07:42 crc kubenswrapper[4739]: E0218 15:07:42.377309 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9\": container with ID starting with e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9 not found: ID does not exist" containerID="e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9"
Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.377354 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9"} err="failed to get container status \"e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9\": rpc error: code = NotFound desc = could not find container \"e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9\": container with ID starting with e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9 not found:
ID does not exist" Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.424158 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" path="/var/lib/kubelet/pods/164a424c-e71a-43f3-9f77-bb4fe38a744d/volumes" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.884561 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:07:52 crc kubenswrapper[4739]: E0218 15:07:52.885768 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="extract-utilities" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.885787 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="extract-utilities" Feb 18 15:07:52 crc kubenswrapper[4739]: E0218 15:07:52.885820 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="extract-content" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.885829 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="extract-content" Feb 18 15:07:52 crc kubenswrapper[4739]: E0218 15:07:52.885876 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.885887 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.886194 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.888484 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.911397 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.014655 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.014933 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.015058 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24msx\" (UniqueName: \"kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.118145 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.118215 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-24msx\" (UniqueName: \"kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.118426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.119120 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.119412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.142901 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24msx\" (UniqueName: \"kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.216611 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.844866 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:07:54 crc kubenswrapper[4739]: I0218 15:07:54.855773 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerID="f1f9b9e1c54beae52f7265ee28954799144516f500fca17c90c4ff42b09460aa" exitCode=0 Feb 18 15:07:54 crc kubenswrapper[4739]: I0218 15:07:54.856081 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerDied","Data":"f1f9b9e1c54beae52f7265ee28954799144516f500fca17c90c4ff42b09460aa"} Feb 18 15:07:54 crc kubenswrapper[4739]: I0218 15:07:54.856108 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerStarted","Data":"ccd22c0d61af81fe07eb21bd9c5f5fb8121e67c49af5486738745ffcb1a098b6"} Feb 18 15:07:55 crc kubenswrapper[4739]: I0218 15:07:55.869308 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerStarted","Data":"d3bf11dd2b7420de54ee4f11991b9952208fb948839ede51ee1c1382e5d6ea79"} Feb 18 15:07:56 crc kubenswrapper[4739]: I0218 15:07:56.880269 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerID="d3bf11dd2b7420de54ee4f11991b9952208fb948839ede51ee1c1382e5d6ea79" exitCode=0 Feb 18 15:07:56 crc kubenswrapper[4739]: I0218 15:07:56.880532 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" 
event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerDied","Data":"d3bf11dd2b7420de54ee4f11991b9952208fb948839ede51ee1c1382e5d6ea79"} Feb 18 15:07:57 crc kubenswrapper[4739]: I0218 15:07:57.894306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerStarted","Data":"46e3f4ebf7ee896208793ef8608d5a06d296d2696d1c42b8f84261514a516633"} Feb 18 15:07:57 crc kubenswrapper[4739]: I0218 15:07:57.921632 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h9phw" podStartSLOduration=3.399007793 podStartE2EDuration="5.921601946s" podCreationTimestamp="2026-02-18 15:07:52 +0000 UTC" firstStartedPulling="2026-02-18 15:07:54.858939226 +0000 UTC m=+4107.354660138" lastFinishedPulling="2026-02-18 15:07:57.381533369 +0000 UTC m=+4109.877254291" observedRunningTime="2026-02-18 15:07:57.91345814 +0000 UTC m=+4110.409179082" watchObservedRunningTime="2026-02-18 15:07:57.921601946 +0000 UTC m=+4110.417322868" Feb 18 15:08:03 crc kubenswrapper[4739]: I0218 15:08:03.217493 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:03 crc kubenswrapper[4739]: I0218 15:08:03.218060 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:03 crc kubenswrapper[4739]: I0218 15:08:03.269565 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:04 crc kubenswrapper[4739]: I0218 15:08:04.441509 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:04 crc kubenswrapper[4739]: I0218 15:08:04.499223 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:08:05 crc kubenswrapper[4739]: I0218 15:08:05.971601 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h9phw" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="registry-server" containerID="cri-o://46e3f4ebf7ee896208793ef8608d5a06d296d2696d1c42b8f84261514a516633" gracePeriod=2 Feb 18 15:08:06 crc kubenswrapper[4739]: E0218 15:08:06.316521 4739 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.80:59324->38.102.83.80:36701: write tcp 38.102.83.80:59324->38.102.83.80:36701: write: broken pipe Feb 18 15:08:06 crc kubenswrapper[4739]: I0218 15:08:06.984607 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerID="46e3f4ebf7ee896208793ef8608d5a06d296d2696d1c42b8f84261514a516633" exitCode=0 Feb 18 15:08:06 crc kubenswrapper[4739]: I0218 15:08:06.984647 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerDied","Data":"46e3f4ebf7ee896208793ef8608d5a06d296d2696d1c42b8f84261514a516633"} Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.452234 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.607058 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities\") pod \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.607390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24msx\" (UniqueName: \"kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx\") pod \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.607619 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content\") pod \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.608416 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities" (OuterVolumeSpecName: "utilities") pod "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" (UID: "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.614973 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx" (OuterVolumeSpecName: "kube-api-access-24msx") pod "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" (UID: "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4"). InnerVolumeSpecName "kube-api-access-24msx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.667415 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" (UID: "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.711021 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.711077 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24msx\" (UniqueName: \"kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.711094 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.996860 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerDied","Data":"ccd22c0d61af81fe07eb21bd9c5f5fb8121e67c49af5486738745ffcb1a098b6"} Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.997134 4739 scope.go:117] "RemoveContainer" containerID="46e3f4ebf7ee896208793ef8608d5a06d296d2696d1c42b8f84261514a516633" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.996897 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:08 crc kubenswrapper[4739]: I0218 15:08:08.021141 4739 scope.go:117] "RemoveContainer" containerID="d3bf11dd2b7420de54ee4f11991b9952208fb948839ede51ee1c1382e5d6ea79" Feb 18 15:08:08 crc kubenswrapper[4739]: I0218 15:08:08.036260 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:08:08 crc kubenswrapper[4739]: I0218 15:08:08.047310 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:08:08 crc kubenswrapper[4739]: I0218 15:08:08.059993 4739 scope.go:117] "RemoveContainer" containerID="f1f9b9e1c54beae52f7265ee28954799144516f500fca17c90c4ff42b09460aa" Feb 18 15:08:08 crc kubenswrapper[4739]: I0218 15:08:08.423484 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" path="/var/lib/kubelet/pods/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4/volumes" Feb 18 15:08:29 crc kubenswrapper[4739]: I0218 15:08:29.372848 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:08:29 crc kubenswrapper[4739]: I0218 15:08:29.373363 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:08:59 crc kubenswrapper[4739]: I0218 15:08:59.372291 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:08:59 crc kubenswrapper[4739]: I0218 15:08:59.372873 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.536698 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:18 crc kubenswrapper[4739]: E0218 15:09:18.537892 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="extract-utilities" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.537908 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="extract-utilities" Feb 18 15:09:18 crc kubenswrapper[4739]: E0218 15:09:18.537918 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="extract-content" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.537925 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="extract-content" Feb 18 15:09:18 crc kubenswrapper[4739]: E0218 15:09:18.537964 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="registry-server" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.537971 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="registry-server" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.538227 4739 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="registry-server" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.540506 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.552537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.640234 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpnnd\" (UniqueName: \"kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.640691 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.640934 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.743369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpnnd\" (UniqueName: \"kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd\") 
pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.743525 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.743579 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.743978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.744105 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.763276 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpnnd\" (UniqueName: \"kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd\") pod \"certified-operators-85jxq\" (UID: 
\"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.870312 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:19 crc kubenswrapper[4739]: I0218 15:09:19.454889 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:19 crc kubenswrapper[4739]: I0218 15:09:19.800632 4739 generic.go:334] "Generic (PLEG): container finished" podID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerID="d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee" exitCode=0 Feb 18 15:09:19 crc kubenswrapper[4739]: I0218 15:09:19.800804 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerDied","Data":"d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee"} Feb 18 15:09:19 crc kubenswrapper[4739]: I0218 15:09:19.801006 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerStarted","Data":"dee91aaa97ae41d79d63105d8dd698fcc24ad6d895bd98e68ce5202a805eaeec"} Feb 18 15:09:21 crc kubenswrapper[4739]: I0218 15:09:21.823213 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerStarted","Data":"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1"} Feb 18 15:09:23 crc kubenswrapper[4739]: I0218 15:09:23.844935 4739 generic.go:334] "Generic (PLEG): container finished" podID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerID="d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1" exitCode=0 Feb 18 15:09:23 crc kubenswrapper[4739]: I0218 
15:09:23.844985 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerDied","Data":"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1"} Feb 18 15:09:24 crc kubenswrapper[4739]: I0218 15:09:24.862083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerStarted","Data":"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2"} Feb 18 15:09:24 crc kubenswrapper[4739]: I0218 15:09:24.887933 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-85jxq" podStartSLOduration=2.4415700830000002 podStartE2EDuration="6.887912639s" podCreationTimestamp="2026-02-18 15:09:18 +0000 UTC" firstStartedPulling="2026-02-18 15:09:19.803975244 +0000 UTC m=+4192.299696166" lastFinishedPulling="2026-02-18 15:09:24.2503178 +0000 UTC m=+4196.746038722" observedRunningTime="2026-02-18 15:09:24.883551489 +0000 UTC m=+4197.379272421" watchObservedRunningTime="2026-02-18 15:09:24.887912639 +0000 UTC m=+4197.383633561" Feb 18 15:09:28 crc kubenswrapper[4739]: I0218 15:09:28.871282 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:28 crc kubenswrapper[4739]: I0218 15:09:28.871754 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:28 crc kubenswrapper[4739]: I0218 15:09:28.927651 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.373177 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.373228 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.373298 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.374085 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.374202 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" gracePeriod=600 Feb 18 15:09:29 crc kubenswrapper[4739]: E0218 15:09:29.516277 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.921368 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" exitCode=0 Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.921418 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"} Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.921488 4739 scope.go:117] "RemoveContainer" containerID="4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.922343 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:09:29 crc kubenswrapper[4739]: E0218 15:09:29.922709 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:09:38 crc kubenswrapper[4739]: I0218 15:09:38.930597 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:38 crc kubenswrapper[4739]: I0218 15:09:38.988493 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.031687 4739 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-85jxq" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="registry-server" containerID="cri-o://fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2" gracePeriod=2 Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.588112 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.680260 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpnnd\" (UniqueName: \"kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd\") pod \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.680359 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content\") pod \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.680432 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities\") pod \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.681255 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities" (OuterVolumeSpecName: "utilities") pod "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" (UID: "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.686570 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd" (OuterVolumeSpecName: "kube-api-access-xpnnd") pod "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" (UID: "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68"). InnerVolumeSpecName "kube-api-access-xpnnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.733825 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" (UID: "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.783841 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpnnd\" (UniqueName: \"kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.783912 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.783922 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.051253 4739 generic.go:334] "Generic (PLEG): container finished" podID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" 
containerID="fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2" exitCode=0 Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.051339 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.051342 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerDied","Data":"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2"} Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.051491 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerDied","Data":"dee91aaa97ae41d79d63105d8dd698fcc24ad6d895bd98e68ce5202a805eaeec"} Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.051518 4739 scope.go:117] "RemoveContainer" containerID="fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.099675 4739 scope.go:117] "RemoveContainer" containerID="d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.102043 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.115671 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.129713 4739 scope.go:117] "RemoveContainer" containerID="d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.196341 4739 scope.go:117] "RemoveContainer" containerID="fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2" Feb 18 
15:09:40 crc kubenswrapper[4739]: E0218 15:09:40.196757 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2\": container with ID starting with fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2 not found: ID does not exist" containerID="fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.196791 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2"} err="failed to get container status \"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2\": rpc error: code = NotFound desc = could not find container \"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2\": container with ID starting with fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2 not found: ID does not exist" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.196812 4739 scope.go:117] "RemoveContainer" containerID="d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1" Feb 18 15:09:40 crc kubenswrapper[4739]: E0218 15:09:40.197096 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1\": container with ID starting with d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1 not found: ID does not exist" containerID="d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.197137 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1"} err="failed to get container status 
\"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1\": rpc error: code = NotFound desc = could not find container \"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1\": container with ID starting with d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1 not found: ID does not exist" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.197168 4739 scope.go:117] "RemoveContainer" containerID="d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee" Feb 18 15:09:40 crc kubenswrapper[4739]: E0218 15:09:40.197615 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee\": container with ID starting with d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee not found: ID does not exist" containerID="d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.197638 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee"} err="failed to get container status \"d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee\": rpc error: code = NotFound desc = could not find container \"d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee\": container with ID starting with d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee not found: ID does not exist" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.417274 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:09:40 crc kubenswrapper[4739]: E0218 15:09:40.417745 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.427896 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" path="/var/lib/kubelet/pods/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68/volumes" Feb 18 15:09:52 crc kubenswrapper[4739]: I0218 15:09:52.412566 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:09:52 crc kubenswrapper[4739]: E0218 15:09:52.413569 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:10:03 crc kubenswrapper[4739]: I0218 15:10:03.410915 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:10:03 crc kubenswrapper[4739]: E0218 15:10:03.412024 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:10:17 crc kubenswrapper[4739]: I0218 15:10:17.413360 4739 scope.go:117] "RemoveContainer" 
containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:10:17 crc kubenswrapper[4739]: E0218 15:10:17.414303 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:10:30 crc kubenswrapper[4739]: I0218 15:10:30.411349 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:10:30 crc kubenswrapper[4739]: E0218 15:10:30.412275 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:10:41 crc kubenswrapper[4739]: I0218 15:10:41.410809 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:10:41 crc kubenswrapper[4739]: E0218 15:10:41.412089 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:10:55 crc kubenswrapper[4739]: I0218 15:10:55.410949 4739 scope.go:117] 
"RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:10:55 crc kubenswrapper[4739]: E0218 15:10:55.412014 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:11:07 crc kubenswrapper[4739]: I0218 15:11:07.411245 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:11:07 crc kubenswrapper[4739]: E0218 15:11:07.412250 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:11:20 crc kubenswrapper[4739]: I0218 15:11:20.411492 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:11:20 crc kubenswrapper[4739]: E0218 15:11:20.412610 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:11:34 crc kubenswrapper[4739]: I0218 15:11:34.411037 
4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:11:34 crc kubenswrapper[4739]: E0218 15:11:34.411935 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:11:46 crc kubenswrapper[4739]: I0218 15:11:46.410727 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:11:46 crc kubenswrapper[4739]: E0218 15:11:46.411704 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:11:58 crc kubenswrapper[4739]: I0218 15:11:58.418277 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:11:58 crc kubenswrapper[4739]: E0218 15:11:58.419230 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:12:11 crc kubenswrapper[4739]: I0218 
15:12:11.410866 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:12:11 crc kubenswrapper[4739]: E0218 15:12:11.411767 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:12:23 crc kubenswrapper[4739]: I0218 15:12:23.411311 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:12:23 crc kubenswrapper[4739]: E0218 15:12:23.413475 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:12:38 crc kubenswrapper[4739]: I0218 15:12:38.420307 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:12:38 crc kubenswrapper[4739]: E0218 15:12:38.421168 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:12:53 crc 
kubenswrapper[4739]: I0218 15:12:53.411165 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:12:53 crc kubenswrapper[4739]: E0218 15:12:53.412038 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:13:08 crc kubenswrapper[4739]: I0218 15:13:08.421187 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:13:08 crc kubenswrapper[4739]: E0218 15:13:08.422135 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:13:21 crc kubenswrapper[4739]: I0218 15:13:21.411962 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:13:21 crc kubenswrapper[4739]: E0218 15:13:21.413176 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 
18 15:13:32 crc kubenswrapper[4739]: I0218 15:13:32.410942 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:13:32 crc kubenswrapper[4739]: E0218 15:13:32.411880 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.642272 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 15:13:41 crc kubenswrapper[4739]: E0218 15:13:41.644405 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="extract-content" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.644535 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="extract-content" Feb 18 15:13:41 crc kubenswrapper[4739]: E0218 15:13:41.644679 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="registry-server" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.644768 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="registry-server" Feb 18 15:13:41 crc kubenswrapper[4739]: E0218 15:13:41.644874 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="extract-utilities" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.644960 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" 
containerName="extract-utilities" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.645317 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="registry-server" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.646532 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.653582 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.654163 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.654271 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qfs6g" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.655752 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.675579 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755089 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755159 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755220 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-964bz\" (UniqueName: \"kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755403 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755427 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755532 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: 
\"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755592 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755629 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.857926 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.857986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858043 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-964bz\" (UniqueName: \"kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858194 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858214 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858287 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858336 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858360 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.859142 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.859236 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.859562 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.859959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.860306 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.865682 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.867926 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.872645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.882755 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-964bz\" (UniqueName: \"kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.912671 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.974634 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 18 15:13:42 crc kubenswrapper[4739]: I0218 15:13:42.496276 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 18 15:13:43 crc kubenswrapper[4739]: I0218 15:13:43.007950 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 15:13:43 crc kubenswrapper[4739]: I0218 15:13:43.411289 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"
Feb 18 15:13:43 crc kubenswrapper[4739]: E0218 15:13:43.412717 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:13:43 crc kubenswrapper[4739]: I0218 15:13:43.684768 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2d70fa76-2eec-4ca5-abd7-44a082625a40","Type":"ContainerStarted","Data":"49f393666c6fdee741ccda2b76d76452444d662539e8f00cf321ebbda9fd14bc"}
Feb 18 15:13:56 crc kubenswrapper[4739]: I0218 15:13:56.411375 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"
Feb 18 15:13:56 crc kubenswrapper[4739]: E0218 15:13:56.412117 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:14:07 crc kubenswrapper[4739]: I0218 15:14:07.411084 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"
Feb 18 15:14:07 crc kubenswrapper[4739]: E0218 15:14:07.411935 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:14:18 crc kubenswrapper[4739]: I0218 15:14:18.411338 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"
Feb 18 15:14:18 crc kubenswrapper[4739]: E0218 15:14:18.412255 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:14:19 crc kubenswrapper[4739]: E0218 15:14:19.462265 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Feb 18 15:14:19 crc kubenswrapper[4739]: E0218 15:14:19.466572 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-964bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(2d70fa76-2eec-4ca5-abd7-44a082625a40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 18 15:14:19 crc kubenswrapper[4739]: E0218 15:14:19.467811 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="2d70fa76-2eec-4ca5-abd7-44a082625a40"
Feb 18 15:14:20 crc kubenswrapper[4739]: E0218 15:14:20.144069 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="2d70fa76-2eec-4ca5-abd7-44a082625a40"
Feb 18 15:14:33 crc kubenswrapper[4739]: I0218 15:14:33.410963 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"
Feb 18 15:14:34 crc kubenswrapper[4739]: I0218 15:14:34.295221 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c"}
Feb 18 15:14:34 crc kubenswrapper[4739]: I0218 15:14:34.880227 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Feb 18 15:14:37 crc kubenswrapper[4739]: I0218 15:14:37.331123 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2d70fa76-2eec-4ca5-abd7-44a082625a40","Type":"ContainerStarted","Data":"8ce8bd03e7ae58cb2a6f6888de57ac7cc952f171cde62e5925154c461eb9d79b"}
Feb 18 15:14:37 crc kubenswrapper[4739]: I0218 15:14:37.356762 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.487458236 podStartE2EDuration="57.356743168s" podCreationTimestamp="2026-02-18 15:13:40 +0000 UTC" firstStartedPulling="2026-02-18 15:13:43.007631747 +0000 UTC m=+4455.503352669" lastFinishedPulling="2026-02-18 15:14:34.876916679 +0000 UTC m=+4507.372637601" observedRunningTime="2026-02-18 15:14:37.347600688 +0000 UTC m=+4509.843321610" watchObservedRunningTime="2026-02-18 15:14:37.356743168 +0000 UTC m=+4509.852464100"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.182655 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"]
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.186598 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.190025 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.190540 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.194982 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4br6g\" (UniqueName: \"kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.195039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.195213 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.196930 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"]
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.296055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.296172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4br6g\" (UniqueName: \"kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.296201 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.297905 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.308297 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.321288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4br6g\" (UniqueName: \"kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.525736 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:01 crc kubenswrapper[4739]: I0218 15:15:01.131549 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"]
Feb 18 15:15:01 crc kubenswrapper[4739]: W0218 15:15:01.138607 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20a5bbeb_3d44_4bb2_8650_b037712d0c02.slice/crio-400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe WatchSource:0}: Error finding container 400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe: Status 404 returned error can't find the container with id 400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe
Feb 18 15:15:01 crc kubenswrapper[4739]: I0218 15:15:01.576816 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" event={"ID":"20a5bbeb-3d44-4bb2-8650-b037712d0c02","Type":"ContainerStarted","Data":"e5511ad99948f34930829fc526d57e4dd5dace947682549c340920c1647859be"}
Feb 18 15:15:01 crc kubenswrapper[4739]: I0218 15:15:01.577169 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" event={"ID":"20a5bbeb-3d44-4bb2-8650-b037712d0c02","Type":"ContainerStarted","Data":"400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe"}
Feb 18 15:15:01 crc kubenswrapper[4739]: I0218 15:15:01.596315 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" podStartSLOduration=1.596291828 podStartE2EDuration="1.596291828s" podCreationTimestamp="2026-02-18 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:15:01.595466817 +0000 UTC m=+4534.091187749" watchObservedRunningTime="2026-02-18 15:15:01.596291828 +0000 UTC m=+4534.092012750"
Feb 18 15:15:02 crc kubenswrapper[4739]: I0218 15:15:02.592742 4739 generic.go:334] "Generic (PLEG): container finished" podID="20a5bbeb-3d44-4bb2-8650-b037712d0c02" containerID="e5511ad99948f34930829fc526d57e4dd5dace947682549c340920c1647859be" exitCode=0
Feb 18 15:15:02 crc kubenswrapper[4739]: I0218 15:15:02.592791 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" event={"ID":"20a5bbeb-3d44-4bb2-8650-b037712d0c02","Type":"ContainerDied","Data":"e5511ad99948f34930829fc526d57e4dd5dace947682549c340920c1647859be"}
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.287362 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.331258 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume\") pod \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") "
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.331411 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4br6g\" (UniqueName: \"kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g\") pod \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") "
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.331645 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume\") pod \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") "
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.332822 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume" (OuterVolumeSpecName: "config-volume") pod "20a5bbeb-3d44-4bb2-8650-b037712d0c02" (UID: "20a5bbeb-3d44-4bb2-8650-b037712d0c02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.339978 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g" (OuterVolumeSpecName: "kube-api-access-4br6g") pod "20a5bbeb-3d44-4bb2-8650-b037712d0c02" (UID: "20a5bbeb-3d44-4bb2-8650-b037712d0c02"). InnerVolumeSpecName "kube-api-access-4br6g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.340486 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20a5bbeb-3d44-4bb2-8650-b037712d0c02" (UID: "20a5bbeb-3d44-4bb2-8650-b037712d0c02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.434572 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.434600 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.434610 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4br6g\" (UniqueName: \"kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g\") on node \"crc\" DevicePath \"\""
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.648732 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" event={"ID":"20a5bbeb-3d44-4bb2-8650-b037712d0c02","Type":"ContainerDied","Data":"400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe"}
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.648776 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe"
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.648950 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.697216 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"]
Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.708303 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"]
Feb 18 15:15:06 crc kubenswrapper[4739]: I0218 15:15:06.435880 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87fcc484-b43a-4471-9ae0-a8af18a937be" path="/var/lib/kubelet/pods/87fcc484-b43a-4471-9ae0-a8af18a937be/volumes"
Feb 18 15:15:38 crc kubenswrapper[4739]: I0218 15:15:38.283960 4739 scope.go:117] "RemoveContainer" containerID="9b76a0bd2d504547a365abbe6087525e7fb33e148bde30e2d85310db58fb4427"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.466516 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"]
Feb 18 15:16:26 crc kubenswrapper[4739]: E0218 15:16:26.473618 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a5bbeb-3d44-4bb2-8650-b037712d0c02" containerName="collect-profiles"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.473722 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a5bbeb-3d44-4bb2-8650-b037712d0c02" containerName="collect-profiles"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.476277 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a5bbeb-3d44-4bb2-8650-b037712d0c02" containerName="collect-profiles"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.491413 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.613242 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"]
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.691717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.692024 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmgh7\" (UniqueName: \"kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.692045 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.795142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmgh7\" (UniqueName: \"kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.795201 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.795623 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.797282 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.797397 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.827176 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmgh7\" (UniqueName: \"kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.841782 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:28 crc kubenswrapper[4739]: I0218 15:16:28.522744 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"]
Feb 18 15:16:28 crc kubenswrapper[4739]: W0218 15:16:28.597691 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23565011_792b_4161_97b4_45ada5703730.slice/crio-8c2bff346f976da76c946bfa6111b508c271512bf1068c19960eac1592d3fae5 WatchSource:0}: Error finding container 8c2bff346f976da76c946bfa6111b508c271512bf1068c19960eac1592d3fae5: Status 404 returned error can't find the container with id 8c2bff346f976da76c946bfa6111b508c271512bf1068c19960eac1592d3fae5
Feb 18 15:16:28 crc kubenswrapper[4739]: I0218 15:16:28.620322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerStarted","Data":"8c2bff346f976da76c946bfa6111b508c271512bf1068c19960eac1592d3fae5"}
Feb 18 15:16:29 crc kubenswrapper[4739]: I0218 15:16:29.631672 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerDied","Data":"5a103046e32e42a528acaed6df0225c2cd7f99af2ad5a68b58e158fd745ccc3b"}
Feb 18 15:16:29 crc kubenswrapper[4739]: I0218 15:16:29.632101 4739 generic.go:334] "Generic (PLEG): container finished" podID="23565011-792b-4161-97b4-45ada5703730" containerID="5a103046e32e42a528acaed6df0225c2cd7f99af2ad5a68b58e158fd745ccc3b" exitCode=0
Feb 18 15:16:31 crc kubenswrapper[4739]: I0218 15:16:31.658923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerStarted","Data":"7cacc3d49d94cbb8aefee2bf91f554922c6da53f57dfa12101add4db6d18366f"}
Feb 18 15:16:33 crc kubenswrapper[4739]: I0218 15:16:33.689865 4739 generic.go:334] "Generic (PLEG): container finished" podID="23565011-792b-4161-97b4-45ada5703730" containerID="7cacc3d49d94cbb8aefee2bf91f554922c6da53f57dfa12101add4db6d18366f" exitCode=0
Feb 18 15:16:33 crc kubenswrapper[4739]: I0218 15:16:33.689942 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerDied","Data":"7cacc3d49d94cbb8aefee2bf91f554922c6da53f57dfa12101add4db6d18366f"}
Feb 18 15:16:35 crc kubenswrapper[4739]: I0218 15:16:35.713051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerStarted","Data":"cfe998818da280781f7bdc044172c538925a006161008ba32bbf943e4e57adc9"}
Feb 18 15:16:35 crc kubenswrapper[4739]: I0218 15:16:35.739883 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k6wtr" podStartSLOduration=5.065571982 podStartE2EDuration="9.73588393s" podCreationTimestamp="2026-02-18 15:16:26 +0000 UTC" firstStartedPulling="2026-02-18 15:16:29.633635703 +0000 UTC m=+4622.129356625" lastFinishedPulling="2026-02-18 15:16:34.303947651 +0000 UTC m=+4626.799668573" observedRunningTime="2026-02-18 15:16:35.73313467 +0000 UTC m=+4628.228855602" watchObservedRunningTime="2026-02-18 15:16:35.73588393 +0000 UTC m=+4628.231604852"
Feb 18 15:16:36 crc kubenswrapper[4739]: I0218 15:16:36.846908 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:36 crc kubenswrapper[4739]: I0218 15:16:36.847273 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k6wtr"
Feb 18 15:16:38 crc kubenswrapper[4739]: I0218 15:16:38.249104 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-k6wtr" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:16:38 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:16:38 crc kubenswrapper[4739]: >
Feb 18 15:16:47 crc kubenswrapper[4739]: I0218 15:16:47.908831 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-k6wtr" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:16:47 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:16:47 crc kubenswrapper[4739]: >
Feb 18 15:16:55 crc kubenswrapper[4739]: I0218 15:16:55.018735 4739 trace.go:236] Trace[572744698]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (18-Feb-2026 15:16:53.986) (total time: 1029ms):
Feb 18 15:16:55 crc kubenswrapper[4739]: Trace[572744698]: [1.029446844s] [1.029446844s] END
Feb 18 15:16:58 crc kubenswrapper[4739]: I0218 15:16:58.040581 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-k6wtr" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:16:58 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:16:58 crc kubenswrapper[4739]: >
Feb 18 15:16:59 crc kubenswrapper[4739]: I0218 15:16:59.372612 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
start-of-body= Feb 18 15:16:59 crc kubenswrapper[4739]: I0218 15:16:59.373914 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:17:06 crc kubenswrapper[4739]: I0218 15:17:06.980320 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:17:07 crc kubenswrapper[4739]: I0218 15:17:07.045380 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:17:09 crc kubenswrapper[4739]: I0218 15:17:09.186783 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"] Feb 18 15:17:09 crc kubenswrapper[4739]: I0218 15:17:09.211348 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k6wtr" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" containerID="cri-o://cfe998818da280781f7bdc044172c538925a006161008ba32bbf943e4e57adc9" gracePeriod=2 Feb 18 15:17:10 crc kubenswrapper[4739]: I0218 15:17:10.155647 4739 generic.go:334] "Generic (PLEG): container finished" podID="23565011-792b-4161-97b4-45ada5703730" containerID="cfe998818da280781f7bdc044172c538925a006161008ba32bbf943e4e57adc9" exitCode=0 Feb 18 15:17:10 crc kubenswrapper[4739]: I0218 15:17:10.155716 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerDied","Data":"cfe998818da280781f7bdc044172c538925a006161008ba32bbf943e4e57adc9"} Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.490025 4739 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.622104 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmgh7\" (UniqueName: \"kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7\") pod \"23565011-792b-4161-97b4-45ada5703730\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.622261 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities\") pod \"23565011-792b-4161-97b4-45ada5703730\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.622865 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content\") pod \"23565011-792b-4161-97b4-45ada5703730\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.702231 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities" (OuterVolumeSpecName: "utilities") pod "23565011-792b-4161-97b4-45ada5703730" (UID: "23565011-792b-4161-97b4-45ada5703730"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.727860 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.755387 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7" (OuterVolumeSpecName: "kube-api-access-lmgh7") pod "23565011-792b-4161-97b4-45ada5703730" (UID: "23565011-792b-4161-97b4-45ada5703730"). InnerVolumeSpecName "kube-api-access-lmgh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.809614 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23565011-792b-4161-97b4-45ada5703730" (UID: "23565011-792b-4161-97b4-45ada5703730"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.831915 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.831953 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmgh7\" (UniqueName: \"kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.216220 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerDied","Data":"8c2bff346f976da76c946bfa6111b508c271512bf1068c19960eac1592d3fae5"} Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.216357 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.219825 4739 scope.go:117] "RemoveContainer" containerID="cfe998818da280781f7bdc044172c538925a006161008ba32bbf943e4e57adc9" Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.329010 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"] Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.346576 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"] Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.431082 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23565011-792b-4161-97b4-45ada5703730" path="/var/lib/kubelet/pods/23565011-792b-4161-97b4-45ada5703730/volumes" Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.453272 4739 scope.go:117] "RemoveContainer" containerID="7cacc3d49d94cbb8aefee2bf91f554922c6da53f57dfa12101add4db6d18366f" Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.522719 4739 scope.go:117] "RemoveContainer" containerID="5a103046e32e42a528acaed6df0225c2cd7f99af2ad5a68b58e158fd745ccc3b" Feb 18 15:17:15 crc kubenswrapper[4739]: I0218 15:17:15.132921 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:15 crc kubenswrapper[4739]: I0218 15:17:15.138859 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:15 crc kubenswrapper[4739]: I0218 15:17:15.802197 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:15 crc kubenswrapper[4739]: I0218 15:17:15.819059 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:16 crc kubenswrapper[4739]: I0218 15:17:16.637823 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:16 crc kubenswrapper[4739]: I0218 15:17:16.861382 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:16 crc kubenswrapper[4739]: I0218 15:17:16.863433 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:16 crc kubenswrapper[4739]: I0218 15:17:16.861478 4739 
patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:16 crc kubenswrapper[4739]: I0218 15:17:16.863776 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.138717 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.142065 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.142132 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.142084 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.142250 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.483729 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" podUID="52927612-b074-4573-aa63-41cbb1d704bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.604605 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.604600 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" 
podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.795237 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.795237 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.797679 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.203102 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.674639 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.675028 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.674660 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.675164 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.742497 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.742585 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.752050 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq 
container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.752132 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:19 crc kubenswrapper[4739]: I0218 15:17:19.424993 4739 trace.go:236] Trace[1068730840]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (18-Feb-2026 15:17:16.058) (total time: 3362ms): Feb 18 15:17:19 crc kubenswrapper[4739]: Trace[1068730840]: [3.362307945s] [3.362307945s] END Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.188666 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.190069 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.190247 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.190108 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.741716 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.742255 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.782677 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.783004 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get 
\"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.827368 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.827758 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.827401 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.828217 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.837972 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler 
namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.838055 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.948721 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podUID="2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.329606 4739 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9zgsz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.329663 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" podUID="fb09df70-be06-48b6-a41d-16fb110b7c55" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.330073 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.524687 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.524768 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.525074 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.525111 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 
15:17:21.536607 4739 patch_prober.go:28] interesting pod/console-b9f98d489-4zk5t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.536687 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b9f98d489-4zk5t" podUID="39496c01-fddc-4d5c-8c1a-32af402a87cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.664677 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.686174 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.686509 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.686267 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.686806 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.076685 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.111480 4739 patch_prober.go:28] interesting pod/metrics-server-f5c56b6cc-ft74f container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.111549 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" probeResult="failure" output="Get 
\"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.376679 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.376844 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.377086 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.377862 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567374 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567512 4739 patch_prober.go:28] interesting pod/monitoring-plugin-58bc79f98c-nzqw5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567516 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567468 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567573 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567533 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podUID="34c89fd8-2d23-4587-a802-4c07ad76bcd7" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.048781 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.048957 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.049168 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.049223 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.082950 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn 
container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.083010 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.082982 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.083914 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.150078 4739 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-68g9x container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 
crc kubenswrapper[4739]: I0218 15:17:23.150163 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podUID="d2537052-1467-4892-afe4-cafbbdfbd645" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.433259 4739 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-ccsmg container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.433339 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podUID="3886312a-0449-43cc-b914-a4633b2c7e80" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.482360 4739 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-wtz97 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.482461 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" podUID="ff0bf868-48fc-48a7-845d-3286c1dd16f0" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.597827 4739 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-grbnx container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.597902 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" podUID="f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.664837 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.664776 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.741146 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.741214 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.751129 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.751184 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.803232 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 15:17:24 crc kubenswrapper[4739]: I0218 15:17:24.740324 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:24 crc kubenswrapper[4739]: 
I0218 15:17:24.740662 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:24 crc kubenswrapper[4739]: I0218 15:17:24.751558 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:24 crc kubenswrapper[4739]: I0218 15:17:24.751630 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:25 crc kubenswrapper[4739]: I0218 15:17:25.793867 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:25 crc kubenswrapper[4739]: I0218 15:17:25.794013 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:25 crc kubenswrapper[4739]: I0218 15:17:25.811908 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Feb 18 15:17:25 crc kubenswrapper[4739]: I0218 15:17:25.812464 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.017706 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.018042 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.668840 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.668972 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.763750 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.763779 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.860923 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.861051 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.861124 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" 
podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.861058 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:27 crc kubenswrapper[4739]: I0218 15:17:27.138857 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:27 crc kubenswrapper[4739]: I0218 15:17:27.423683 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:27 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:27 crc kubenswrapper[4739]: > Feb 18 15:17:27 crc kubenswrapper[4739]: I0218 15:17:27.429769 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:27 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:27 crc kubenswrapper[4739]: > Feb 18 15:17:27 crc 
kubenswrapper[4739]: I0218 15:17:27.795259 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:27 crc kubenswrapper[4739]: I0218 15:17:27.795287 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.253697 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.254034 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.689705 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.690046 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.689715 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.690103 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.741891 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.741990 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.751872 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:28 crc 
kubenswrapper[4739]: I0218 15:17:28.751968 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.143622 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.144286 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.143798 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.144794 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.373100 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.374272 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.807668 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.813304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.825257 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"17c3780ab8ac0d7b8c9a7b14ec263189c1e018fcb68ef427cecb539c67cd078b"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.832378 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerName="ceilometer-central-agent" 
containerID="cri-o://17c3780ab8ac0d7b8c9a7b14ec263189c1e018fcb68ef427cecb539c67cd078b" gracePeriod=30 Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.005362 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.005321 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.023951 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.024025 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.788778 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.788962 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.870728 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.870798 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.870846 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.870949 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953620 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953857 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953887 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953924 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953941 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" 
podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953976 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953978 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953992 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.954001 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.954067 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.095839 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.095972 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.095874 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.096074 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.177691 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podUID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.261750 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podUID="92f1b9c3-1bdd-48ca-9a76-68ace2635cf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.261828 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podUID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.261895 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podUID="92f1b9c3-1bdd-48ca-9a76-68ace2635cf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.261942 4739 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9zgsz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.261970 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" podUID="fb09df70-be06-48b6-a41d-16fb110b7c55" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.361776 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.361946 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.506718 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.507112 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.506746 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.507421 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.534609 4739 patch_prober.go:28] interesting pod/console-b9f98d489-4zk5t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.534682 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b9f98d489-4zk5t" podUID="39496c01-fddc-4d5c-8c1a-32af402a87cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.704699 
4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787882 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787915 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787948 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787982 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787995 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.788349 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787783 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.035752 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.035783 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.118517 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podUID="ac911184-3930-4f7e-9d77-2cc9e7262ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.200061 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.200563 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podUID="ac911184-3930-4f7e-9d77-2cc9e7262ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.201081 4739 patch_prober.go:28] interesting pod/metrics-server-f5c56b6cc-ft74f container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.201143 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" 
podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.201335 4739 patch_prober.go:28] interesting pod/metrics-server-f5c56b6cc-ft74f container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.201385 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.283708 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.283785 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.283829 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.283867 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.283931 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.375665 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.375793 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.375964 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.376004 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.525662 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.525738 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.525813 4739 patch_prober.go:28] interesting pod/monitoring-plugin-58bc79f98c-nzqw5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.525838 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podUID="34c89fd8-2d23-4587-a802-4c07ad76bcd7" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.048490 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.049593 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.048493 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.049726 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.083258 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.083434 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.083605 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.083675 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.150335 4739 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-68g9x container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.150412 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podUID="d2537052-1467-4892-afe4-cafbbdfbd645" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.223626 4739 trace.go:236] Trace[1608003166]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (18-Feb-2026 15:17:29.224) (total time: 3995ms): Feb 18 15:17:33 crc kubenswrapper[4739]: Trace[1608003166]: [3.995213663s] [3.995213663s] END Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.433077 4739 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-ccsmg container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.433165 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podUID="3886312a-0449-43cc-b914-a4633b2c7e80" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.484859 4739 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-wtz97 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/readyz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.484930 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" podUID="ff0bf868-48fc-48a7-845d-3286c1dd16f0" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.624416 4739 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-grbnx container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.624497 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" podUID="f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.625001 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.740624 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd 
container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.740655 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.740689 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.740731 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.751961 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.751975 4739 patch_prober.go:28] interesting 
pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.752035 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.752081 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.790756 4739 patch_prober.go:28] interesting pod/thanos-querier-6d644458fc-hpxhn container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.790832 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podUID="cd8f90ea-5539-40b0-ba4b-8b4465eae2dd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:34 crc 
kubenswrapper[4739]: I0218 15:17:34.434604 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.434950 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.640590 4739 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.640912 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="8cadd086-3e21-4dfc-9577-356fdcfe83c1" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.657020 4739 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.657490 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-index-gateway-0" podUID="d13e1961-45de-4db2-a4cb-04c91c7b18ad" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.129474 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.130213 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.521726 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" podUID="d5023d08-507d-422f-b218-72057e18ef93" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.795839 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.795860 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" 
containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.810778 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.810899 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.820668 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.820751 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.826161 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.017884 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.018237 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.624698 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.625185 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.761693 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.761809 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.794729 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.794763 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.794789 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.794850 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.794873 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.799175 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.815538 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="3e688eb1-895d-465e-b5d9-a7b7ba9f4650" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.253:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.815777 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" 
podUID="3e688eb1-895d-465e-b5d9-a7b7ba9f4650" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.253:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.860686 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.863559 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.863602 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.860831 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.863949 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.864036 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.864811 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"426a0d24cd8b8e5f72676298bc58b2a8e065bf98107a8c456aff7e5de045c61c"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.864849 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" containerID="cri-o://426a0d24cd8b8e5f72676298bc58b2a8e065bf98107a8c456aff7e5de045c61c" gracePeriod=30 Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.305812 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" podUID="bf495248-0dde-4619-bce7-2cbbda1fd646" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.305936 4739 prober.go:107] "Probe failed" 
probeType="Liveness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.305976 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-w8l6z" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.308744 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.308768 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.308784 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" podUID="bf495248-0dde-4619-bce7-2cbbda1fd646" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.315484 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"4b1aee6726e01b4f3e809ead95869c18e7f0932b5c6c23caf9d58537654c4378"} pod="metallb-system/frr-k8s-w8l6z" containerMessage="Container frr failed liveness probe, will be restarted" Feb 18 15:17:37 crc 
kubenswrapper[4739]: I0218 15:17:37.315615 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" containerID="cri-o://4b1aee6726e01b4f3e809ead95869c18e7f0932b5c6c23caf9d58537654c4378" gracePeriod=2 Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.483883 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" podUID="52927612-b074-4573-aa63-41cbb1d704bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.664051 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.664173 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.715141 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc 
kubenswrapper[4739]: I0218 15:17:37.794108 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.142257 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.142622 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.142340 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.142727 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.252635 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.252660 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.252792 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8gqkq" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.649856 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.650245 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.650317 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.651496 4739 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"714b0e311cf9c7f19440fbee07a029c180a9456bf6cca7b41a364e0fdd30c2ef"} pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.651546 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" containerID="cri-o://714b0e311cf9c7f19440fbee07a029c180a9456bf6cca7b41a364e0fdd30c2ef" gracePeriod=30 Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.685079 4739 generic.go:334] "Generic (PLEG): container finished" podID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerID="4b1aee6726e01b4f3e809ead95869c18e7f0932b5c6c23caf9d58537654c4378" exitCode=143 Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.685173 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerDied","Data":"4b1aee6726e01b4f3e809ead95869c18e7f0932b5c6c23caf9d58537654c4378"} Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.688608 4739 generic.go:334] "Generic (PLEG): container finished" podID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerID="17c3780ab8ac0d7b8c9a7b14ec263189c1e018fcb68ef427cecb539c67cd078b" exitCode=0 Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.688652 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerDied","Data":"17c3780ab8ac0d7b8c9a7b14ec263189c1e018fcb68ef427cecb539c67cd078b"} Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.691658 4739 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.691721 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.691808 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.740918 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.741014 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.741354 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": context deadline exceeded" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 
15:17:38.741410 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": context deadline exceeded" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.751235 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.751298 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.751382 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.751402 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.790804 4739 patch_prober.go:28] interesting 
pod/thanos-querier-6d644458fc-hpxhn container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.74:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.790902 4739 patch_prober.go:28] interesting pod/thanos-querier-6d644458fc-hpxhn container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.790958 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podUID="cd8f90ea-5539-40b0-ba4b-8b4465eae2dd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.790947 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podUID="cd8f90ea-5539-40b0-ba4b-8b4465eae2dd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.799216 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.294534 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" 
output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.414221 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-p4z7n" podUID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.416429 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-fmqk2" podUID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.416678 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-fmqk2" podUID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.509584 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-v6sbz" podUID="c0ff243b-1f5d-4ab1-af8c-38a98b870d27" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.518069 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-v6sbz" 
podUID="c0ff243b-1f5d-4ab1-af8c-38a98b870d27" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.619194 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.623278 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.718717 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"c7a405ca20cfc4b7316f76c9d44bf6f7d68548abd23ace50bc9925377095b1b4"} Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.960395 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.960474 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" 
podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.960489 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.960572 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.107758 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-p4z7n" podUID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:40 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:40 crc kubenswrapper[4739]: > Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.551611 4739 trace.go:236] Trace[91807219]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (18-Feb-2026 15:17:34.927) (total time: 5619ms): Feb 18 15:17:40 crc kubenswrapper[4739]: Trace[91807219]: [5.61925257s] [5.61925257s] END Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.749711 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podUID="61bc4b17-baf6-435c-9280-b97fcede913c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.749739 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.758968 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"3b88ad6a451cb11031d26153d44ccaf6530ebcbeea5a0eee1ba554d1ea07e86c"} Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.762602 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" event={"ID":"26e9543b-d10d-461c-8751-99e53b680e1c","Type":"ContainerDied","Data":"426a0d24cd8b8e5f72676298bc58b2a8e065bf98107a8c456aff7e5de045c61c"} Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.767220 4739 generic.go:334] "Generic (PLEG): container finished" podID="26e9543b-d10d-461c-8751-99e53b680e1c" containerID="426a0d24cd8b8e5f72676298bc58b2a8e065bf98107a8c456aff7e5de045c61c" exitCode=0 Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.805036 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.819569 4739 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.819728 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.823066 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831716 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831787 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831873 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831716 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831745 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831988 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.832111 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.832657 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.872716 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.872774 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.872797 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.872893 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873179 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873250 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe 
status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873285 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873356 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873276 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873555 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.876942 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"2e24119667eedf40b82477d0bd3173e3790841c18a675752032ca58080019729"} pod="openshift-console-operator/console-operator-58897d9998-fqdjl" containerMessage="Container console-operator failed liveness probe, will be restarted" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.877040 4739 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" containerID="cri-o://2e24119667eedf40b82477d0bd3173e3790841c18a675752032ca58080019729" gracePeriod=30 Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.913827 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.913901 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.914003 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.996721 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podUID="877f7fe3-168f-4b05-a88e-a7a11bf45e36" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.997219 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podUID="2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.997900 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podUID="60bad312-a989-43d1-87e6-6c6f10d1ae8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.096285 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.096350 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.096389 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.096424 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.104082 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-w8l6z" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.137882 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podUID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.178706 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podUID="92f1b9c3-1bdd-48ca-9a76-68ace2635cf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.178769 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.178833 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.178859 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.178930 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303789 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podUID="3b114d0a-837c-4f0c-b02a-db694bdab362" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303877 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303939 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303889 4739 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9zgsz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303994 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" podUID="fb09df70-be06-48b6-a41d-16fb110b7c55" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.304019 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303908 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.317430 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.317532 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"f4b0d8e8e140fb6de11974026f9767ddfdf44ffbc0d5f61b072eb7c7dcd22916"} pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.317621 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" podUID="fb09df70-be06-48b6-a41d-16fb110b7c55" containerName="authentication-operator" containerID="cri-o://f4b0d8e8e140fb6de11974026f9767ddfdf44ffbc0d5f61b072eb7c7dcd22916" gracePeriod=30 Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.345061 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.345196 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.386772 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" podUID="209f2e6c-29e9-444b-b14a-10eadb782a59" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.469672 4739 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c7d667b45-kx8bw container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.469730 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" podUID="4091e4df-be25-4e94-bf12-7079a8ce9b5f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.469791 4739 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c7d667b45-kx8bw container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.469813 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" podUID="4091e4df-be25-4e94-bf12-7079a8ce9b5f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.535128 4739 patch_prober.go:28] interesting pod/console-b9f98d489-4zk5t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 
18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.535438 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b9f98d489-4zk5t" podUID="39496c01-fddc-4d5c-8c1a-32af402a87cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.535675 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b9f98d489-4zk5t"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.704691 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.704788 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.705050 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745758 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745831 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745830 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" podUID="8336a5f7-2ff0-440a-88b0-a6ab51692965" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745892 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745834 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745989 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.746027 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.748074 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"d22e2a825118fd5fe2867dcdb8fdfcade6e169eb808d0666acc156a1903a123a"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" containerMessage="Container packageserver failed liveness probe, will be restarted"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.748143 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" containerID="cri-o://d22e2a825118fd5fe2867dcdb8fdfcade6e169eb808d0666acc156a1903a123a" gracePeriod=30
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.849113 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"3a9511a2775b08e37ccce91ae91ba1e1e8cf796f076f0c19d9ce73a8baf793c5"} pod="openshift-ingress/router-default-5444994796-5cdhr" containerMessage="Container router failed liveness probe, will be restarted"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.849201 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" containerID="cri-o://3a9511a2775b08e37ccce91ae91ba1e1e8cf796f076f0c19d9ce73a8baf793c5" gracePeriod=10
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.874594 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.874606 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.874735 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.874671 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.919985 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.920291 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.993800 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.043813 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podUID="ac911184-3930-4f7e-9d77-2cc9e7262ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.085351 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.126736 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.168635 4739 patch_prober.go:28] interesting pod/metrics-server-f5c56b6cc-ft74f container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.168703 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.168777 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.169052 4739 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.170244 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"3d8147b125cb5878360a74eb88bb0e2f86a338193df75f8534e81151d855bde8"} pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" containerMessage="Container metrics-server failed liveness probe, will be restarted"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.170303 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" containerID="cri-o://3d8147b125cb5878360a74eb88bb0e2f86a338193df75f8534e81151d855bde8" gracePeriod=170
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.376744 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.376816 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.376903 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.377021 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.377066 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.377136 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.378496 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"f4ddca9038d3bd4756dcc8087b9a9bb925c7b018b9bc46301518d2782cc7fee9"} pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" containerMessage="Container operator failed liveness probe, will be restarted"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.378559 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" containerID="cri-o://f4ddca9038d3bd4756dcc8087b9a9bb925c7b018b9bc46301518d2782cc7fee9" gracePeriod=30
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.417642 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.563850 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564076 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564122 4739 patch_prober.go:28] interesting pod/console-b9f98d489-4zk5t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564148 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564162 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b9f98d489-4zk5t" podUID="39496c01-fddc-4d5c-8c1a-32af402a87cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.563859 4739 patch_prober.go:28] interesting pod/monitoring-plugin-58bc79f98c-nzqw5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564221 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podUID="34c89fd8-2d23-4587-a802-4c07ad76bcd7" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564238 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564289 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564433 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.622805 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.746882 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.746922 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.747868 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.875283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" event={"ID":"26e9543b-d10d-461c-8751-99e53b680e1c","Type":"ContainerStarted","Data":"a0176d656322c79a20a90d02d4d53a024199d46465995e35aeee88d383e2c911"}
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.875351 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.916698 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.917162 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.921560 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.921631 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.049037 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.049046 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.049539 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.049611 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.050113 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.051061 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"56a1307aaf68651b341dd9b1e7344cad7501683c6ef6d4563093ee7194ac943e"} pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" containerMessage="Container route-controller-manager failed liveness probe, will be restarted"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.051103 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" containerID="cri-o://56a1307aaf68651b341dd9b1e7344cad7501683c6ef6d4563093ee7194ac943e" gracePeriod=30
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.083522 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.083579 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.083594 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.083653 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.083645 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.085210 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"54d7a8890659b3c46b4640bcb52cc98af7b156c2ab3e4bf6fa198003af572ff7"} pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" containerMessage="Container controller-manager failed liveness probe, will be restarted"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.085254 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" containerID="cri-o://54d7a8890659b3c46b4640bcb52cc98af7b156c2ab3e4bf6fa198003af572ff7" gracePeriod=30
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.151100 4739 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-68g9x container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.151161 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podUID="d2537052-1467-4892-afe4-cafbbdfbd645" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.151292 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.420662 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.420747 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.433569 4739 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-ccsmg container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.433653 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podUID="3886312a-0449-43cc-b914-a4633b2c7e80" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.433746 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.482701 4739 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-wtz97 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.483015 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" podUID="ff0bf868-48fc-48a7-845d-3286c1dd16f0" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.483099 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.564746 4739 patch_prober.go:28] interesting pod/monitoring-plugin-58bc79f98c-nzqw5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.564826 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podUID="34c89fd8-2d23-4587-a802-4c07ad76bcd7" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.700708 4739 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-grbnx container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701003 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" podUID="f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701017 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701048 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701162 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701374 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.700708 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701506 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.745056 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.745148 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.752632 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.752709 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.791434 4739 patch_prober.go:28] interesting pod/thanos-querier-6d644458fc-hpxhn container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.791543 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podUID="cd8f90ea-5539-40b0-ba4b-8b4465eae2dd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.795701 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-xwm5v" podUID="547a8c99-05a3-45bf-9e45-785d6cdb8fb5" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.951188 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.952513 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.142600 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.142623 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.142668 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.142720 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.142676 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.143167 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.146563 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"9544046d49726b08bf59463c644ffe22c27473e133ce5760004a0699f322d56b"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Feb 18 15:17:44 crc
kubenswrapper[4739]: I0218 15:17:44.146624 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" containerID="cri-o://9544046d49726b08bf59463c644ffe22c27473e133ce5760004a0699f322d56b" gracePeriod=30 Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.152187 4739 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-68g9x container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.152356 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podUID="d2537052-1467-4892-afe4-cafbbdfbd645" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.435042 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.435107 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc 
kubenswrapper[4739]: I0218 15:17:44.483634 4739 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-wtz97 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.483745 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" podUID="ff0bf868-48fc-48a7-845d-3286c1dd16f0" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.636584 4739 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.636644 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="8cadd086-3e21-4dfc-9577-356fdcfe83c1" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.657145 4739 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.657232 4739 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="d13e1961-45de-4db2-a4cb-04c91c7b18ad" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.702733 4739 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-grbnx container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.702814 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" podUID="f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.743638 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.951628 4739 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-ccsmg container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 
15:17:44.953259 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podUID="3886312a-0449-43cc-b914-a4633b2c7e80" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.128536 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.128601 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.522653 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" podUID="d5023d08-507d-422f-b218-72057e18ef93" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.796721 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.796765 4739 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" podUID="7e037260-564c-4a0e-bfd4-f5452ccd7e5b" containerName="sbdb" probeResult="failure" output="command timed out" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.796721 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.796822 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" podUID="7e037260-564c-4a0e-bfd4-f5452ccd7e5b" containerName="nbdb" probeResult="failure" output="command timed out" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.797078 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.810310 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.810487 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.810689 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.860594 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.860647 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.860674 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.860691 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.987703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" 
event={"ID":"0348c042-11c0-4a27-a8d4-04beea8e11a3","Type":"ContainerDied","Data":"f4ddca9038d3bd4756dcc8087b9a9bb925c7b018b9bc46301518d2782cc7fee9"} Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.989247 4739 generic.go:334] "Generic (PLEG): container finished" podID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerID="f4ddca9038d3bd4756dcc8087b9a9bb925c7b018b9bc46301518d2782cc7fee9" exitCode=0 Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.007265 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-fqdjl_07036c39-40f5-4969-afd0-1003c1eae037/console-operator/0.log" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.007379 4739 generic.go:334] "Generic (PLEG): container finished" podID="07036c39-40f5-4969-afd0-1003c1eae037" containerID="2e24119667eedf40b82477d0bd3173e3790841c18a675752032ca58080019729" exitCode=1 Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.007427 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" event={"ID":"07036c39-40f5-4969-afd0-1003c1eae037","Type":"ContainerDied","Data":"2e24119667eedf40b82477d0bd3173e3790841c18a675752032ca58080019729"} Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.017674 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.017766 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.018306 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.018463 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.023810 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"51d685075d5784c3ee8f2b4aece9414104ea75b1f0e897b19ab1e41648c0b843"} pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" containerMessage="Container webhook-server failed liveness probe, will be restarted" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.024348 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" containerID="cri-o://51d685075d5784c3ee8f2b4aece9414104ea75b1f0e897b19ab1e41648c0b843" gracePeriod=2 Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.667687 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.667733 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.760806 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.760952 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.761605 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.761696 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.762797 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"de2ce2c2e7e8920c945292e32d288535f4d829f8fe7efd2af53224c6a19bfdd9"} pod="metallb-system/controller-69bbfbf88f-tr2nx" containerMessage="Container controller failed liveness probe, will be restarted" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.762893 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" 
containerID="cri-o://de2ce2c2e7e8920c945292e32d288535f4d829f8fe7efd2af53224c6a19bfdd9" gracePeriod=2 Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.816389 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="3e688eb1-895d-465e-b5d9-a7b7ba9f4650" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.253:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.206677 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.329607 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" podUID="8bf4ed0a-8055-462b-9324-1fa1c4f429b1" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.329825 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" podUID="bf495248-0dde-4619-bce7-2cbbda1fd646" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371620 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371669 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371749 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371782 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": EOF" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371827 4739 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371873 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" podUID="bf495248-0dde-4619-bce7-2cbbda1fd646" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 
15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.437526 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" start-of-body= Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.437577 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.437799 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.437870 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.524693 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" podUID="52927612-b074-4573-aa63-41cbb1d704bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.524640 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" podUID="52927612-b074-4573-aa63-41cbb1d704bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.794888 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.046260 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-fqdjl_07036c39-40f5-4969-afd0-1003c1eae037/console-operator/0.log" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.046656 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" event={"ID":"07036c39-40f5-4969-afd0-1003c1eae037","Type":"ContainerStarted","Data":"448c4bf1c14bd59255ae71526fdd326b53d90eea5f151e24381bbae63e4aa0c2"} Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.046909 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.047287 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.047333 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" 
probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.049874 4739 generic.go:334] "Generic (PLEG): container finished" podID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerID="51d685075d5784c3ee8f2b4aece9414104ea75b1f0e897b19ab1e41648c0b843" exitCode=0 Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.049948 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" event={"ID":"0183ebc4-768c-4e08-8f1c-059fff8ba4e3","Type":"ContainerDied","Data":"51d685075d5784c3ee8f2b4aece9414104ea75b1f0e897b19ab1e41648c0b843"} Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.053408 4739 generic.go:334] "Generic (PLEG): container finished" podID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerID="de2ce2c2e7e8920c945292e32d288535f4d829f8fe7efd2af53224c6a19bfdd9" exitCode=0 Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.053509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-tr2nx" event={"ID":"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d","Type":"ContainerDied","Data":"de2ce2c2e7e8920c945292e32d288535f4d829f8fe7efd2af53224c6a19bfdd9"} Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.055703 4739 generic.go:334] "Generic (PLEG): container finished" podID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerID="d22e2a825118fd5fe2867dcdb8fdfcade6e169eb808d0666acc156a1903a123a" exitCode=0 Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.055742 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" event={"ID":"d27c3dde-4f78-49ec-8cc2-39c588d91f56","Type":"ContainerDied","Data":"d22e2a825118fd5fe2867dcdb8fdfcade6e169eb808d0666acc156a1903a123a"} Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.252785 4739 prober.go:107] "Probe failed" 
probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.252795 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.252894 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-8gqkq" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.254012 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"e0f5239ecd0d03308f1e80f91a9ed7eb0f584e8c0d82253a4f43fe0ea69f33e0"} pod="metallb-system/speaker-8gqkq" containerMessage="Container speaker failed liveness probe, will be restarted" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.254108 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" containerID="cri-o://e0f5239ecd0d03308f1e80f91a9ed7eb0f584e8c0d82253a4f43fe0ea69f33e0" gracePeriod=2 Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.607426 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.745749 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.745820 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.745821 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.745891 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.752301 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.752363 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.752301 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.752825 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.792631 4739 patch_prober.go:28] interesting pod/thanos-querier-6d644458fc-hpxhn container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.792702 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podUID="cd8f90ea-5539-40b0-ba4b-8b4465eae2dd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.822209 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.086015 4739 patch_prober.go:28] interesting pod/apiserver-76f77b778f-n78q8 container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.5:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.086050 4739 patch_prober.go:28] interesting pod/apiserver-76f77b778f-n78q8 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.086092 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" podUID="86f15b94-810d-4448-a663-fd8862f0e601" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.086117 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" podUID="86f15b94-810d-4448-a663-fd8862f0e601" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.116252 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" 
event={"ID":"d27c3dde-4f78-49ec-8cc2-39c588d91f56","Type":"ContainerStarted","Data":"fcb7a2a732a4e62cfe6cc4e0b6ca5e900a768e98885a703266d0a7cb837318fb"} Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.117245 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.117511 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.117744 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.139468 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" event={"ID":"0348c042-11c0-4a27-a8d4-04beea8e11a3","Type":"ContainerStarted","Data":"027fa5b895c9d2041710b3aecf7247baa77cc62da23c5b99c574829a89498229"} Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.140176 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" start-of-body= Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.140233 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" 
podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.142261 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.142306 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.153699 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" event={"ID":"0183ebc4-768c-4e08-8f1c-059fff8ba4e3","Type":"ContainerStarted","Data":"c897f89bd17bf83567088a7d419fd6c874771118a180dbd13b1b768c5af07ce3"} Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.155352 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.168923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-tr2nx" event={"ID":"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d","Type":"ContainerStarted","Data":"e79657c8c634b205d921089dfeb80a880b25482cb3abfbc711a1d89f86580bf9"} Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.168975 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.172751 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.172819 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.208510 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 15:17:49 crc kubenswrapper[4739]: E0218 15:17:49.257646 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:17:39Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:17:39Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:17:39Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:17:39Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.294949 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.806408 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.807044 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 15:17:49 
crc kubenswrapper[4739]: I0218 15:17:49.808073 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.808197 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.820726 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea"} pod="openstack-operators/openstack-operator-index-cnhvq" containerMessage="Container registry-server failed liveness probe, will be restarted" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.820794 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" containerID="cri-o://f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea" gracePeriod=30 Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.825131 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.825169 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection 
refused" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.825216 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.825262 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.926803 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.926877 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.960866 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:49 crc 
kubenswrapper[4739]: I0218 15:17:49.960935 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.961020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.961093 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.961161 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.961232 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.962388 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"873aca0bbc81a7124b75ae87a2863a7a8a119c825b1bc26fde747334cd6eb3e4"} pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" 
containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.181251 4739 generic.go:334] "Generic (PLEG): container finished" podID="65fdc711-6806-433f-9f62-a09e816c6acf" containerID="e0f5239ecd0d03308f1e80f91a9ed7eb0f584e8c0d82253a4f43fe0ea69f33e0" exitCode=0 Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.181587 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gqkq" event={"ID":"65fdc711-6806-433f-9f62-a09e816c6acf","Type":"ContainerDied","Data":"e0f5239ecd0d03308f1e80f91a9ed7eb0f584e8c0d82253a4f43fe0ea69f33e0"} Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.182939 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.183035 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" start-of-body= Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.183079 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.183328 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.183386 4739 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.324710 4739 trace.go:236] Trace[949522613]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (18-Feb-2026 15:17:46.654) (total time: 3666ms): Feb 18 15:17:50 crc kubenswrapper[4739]: Trace[949522613]: [3.666445139s] [3.666445139s] END Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.324711 4739 trace.go:236] Trace[1410388390]: "Calculate volume metrics of storage for pod minio-dev/minio" (18-Feb-2026 15:17:47.993) (total time: 2326ms): Feb 18 15:17:50 crc kubenswrapper[4739]: Trace[1410388390]: [2.326921222s] [2.326921222s] END Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.324789 4739 trace.go:236] Trace[1685105932]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (18-Feb-2026 15:17:44.729) (total time: 5591ms): Feb 18 15:17:50 crc kubenswrapper[4739]: Trace[1685105932]: [5.591545492s] [5.591545492s] END Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.622851 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": dial tcp 10.217.0.115:8081: connect: connection refused" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.623048 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.115:8081/readyz\": dial tcp 10.217.0.115:8081: connect: connection refused" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.623173 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.623624 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": dial tcp 10.217.0.115:8081: connect: connection refused" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.793211 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.794548 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-p4z7n" podUID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.795818 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.796014 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.798395 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-p4z7n" podUID="0cc54472-7fa4-457e-a332-420ce4a7da93" 
containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.874702 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podUID="61bc4b17-baf6-435c-9280-b97fcede913c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.874713 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916671 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916695 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podUID="61bc4b17-baf6-435c-9280-b97fcede913c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916746 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get 
\"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916829 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916853 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916892 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916910 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916947 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.917024 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.999590 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.999666 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:50.999723 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.018850 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc 
kubenswrapper[4739]: I0218 15:17:51.018921 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podUID="877f7fe3-168f-4b05-a88e-a7a11bf45e36" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.018998 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.019158 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.019190 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.019233 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": EOF" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.019252 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": EOF" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.102584 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.102638 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.102706 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podUID="60bad312-a989-43d1-87e6-6c6f10d1ae8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.102839 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.102859 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 
10.217.0.22:5443: connect: connection refused" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.185437 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.192425 4739 generic.go:334] "Generic (PLEG): container finished" podID="d34f7233-92b8-4803-ab81-0da45a4de925" containerID="056e9102a7f1a0d4fcedd4064bb1d26c99b0d9df59bf742820c56be6d652517b" exitCode=1 Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.192549 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" event={"ID":"d34f7233-92b8-4803-ab81-0da45a4de925","Type":"ContainerDied","Data":"056e9102a7f1a0d4fcedd4064bb1d26c99b0d9df59bf742820c56be6d652517b"} Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.196462 4739 scope.go:117] "RemoveContainer" containerID="056e9102a7f1a0d4fcedd4064bb1d26c99b0d9df59bf742820c56be6d652517b" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.226857 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.226942 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.227430 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podUID="877f7fe3-168f-4b05-a88e-a7a11bf45e36" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.227815 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.227981 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podUID="60bad312-a989-43d1-87e6-6c6f10d1ae8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.228226 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269776 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podUID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269819 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269885 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269910 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269946 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269949 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.270006 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.270088 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.270872 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"71cd9ce0ab26ac5d77f5f24bda6ba500e6e908373465984fe7265b695d172478"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.270916 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" containerID="cri-o://71cd9ce0ab26ac5d77f5f24bda6ba500e6e908373465984fe7265b695d172478" gracePeriod=30 Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.311990 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podUID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.312412 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podUID="2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.394647 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podUID="3b114d0a-837c-4f0c-b02a-db694bdab362" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.395016 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.395075 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.395202 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.436788 4739 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c7d667b45-kx8bw container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.436898 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" podUID="4091e4df-be25-4e94-bf12-7079a8ce9b5f" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.436993 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podUID="3b114d0a-837c-4f0c-b02a-db694bdab362" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.436994 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.437078 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.437203 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.437261 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" Feb 18 15:17:51 crc 
kubenswrapper[4739]: I0218 15:17:51.437932 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.521651 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.521731 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.521920 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.521942 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.522076 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.522597 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.535143 4739 patch_prober.go:28] interesting pod/console-b9f98d489-4zk5t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.535194 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b9f98d489-4zk5t" podUID="39496c01-fddc-4d5c-8c1a-32af402a87cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.704688 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.704739 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.711246 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.793486 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-fmqk2" podUID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.793486 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-fmqk2" podUID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.820099 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-v6sbz" podUID="c0ff243b-1f5d-4ab1-af8c-38a98b870d27" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:51 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:51 crc kubenswrapper[4739]: > Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.913691 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.033700 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.034188 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" 
podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.034274 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.121286 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.207094 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.207109 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.233721 4739 generic.go:334] "Generic (PLEG): container finished" podID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerID="71cd9ce0ab26ac5d77f5f24bda6ba500e6e908373465984fe7265b695d172478" exitCode=0 Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.233788 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" event={"ID":"db4aad67-0ef8-474a-9e92-143738aed5b6","Type":"ContainerDied","Data":"71cd9ce0ab26ac5d77f5f24bda6ba500e6e908373465984fe7265b695d172478"} Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.240365 4739 generic.go:334] "Generic (PLEG): container finished" podID="52927612-b074-4573-aa63-41cbb1d704bf" containerID="d3e8ca41d583375bdc3898cd694974bbd81d5102bd70a0f141e5a482d3d4a18a" exitCode=1 Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.240408 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" event={"ID":"52927612-b074-4573-aa63-41cbb1d704bf","Type":"ContainerDied","Data":"d3e8ca41d583375bdc3898cd694974bbd81d5102bd70a0f141e5a482d3d4a18a"} Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.241208 4739 scope.go:117] "RemoveContainer" containerID="d3e8ca41d583375bdc3898cd694974bbd81d5102bd70a0f141e5a482d3d4a18a" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.245799 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" event={"ID":"d34f7233-92b8-4803-ab81-0da45a4de925","Type":"ContainerStarted","Data":"61f0f91a573ef08cafaed2521fe7636c043699e262f99e85e72b59d65a49984d"} Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.246321 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.250523 4739 patch_prober.go:28] interesting pod/metrics-server-f5c56b6cc-ft74f container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.250824 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.250532 4739 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.250683 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.250865 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.251119 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 
15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.251142 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.251177 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.251226 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.251354 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.259821 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.259869 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="610a047b229be1341e5743f79181f9b3692358957501791b9cc4b591a8f75fdd" exitCode=1 Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.259944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"610a047b229be1341e5743f79181f9b3692358957501791b9cc4b591a8f75fdd"} Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.259970 4739 scope.go:117] "RemoveContainer" containerID="158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.260637 4739 scope.go:117] "RemoveContainer" containerID="610a047b229be1341e5743f79181f9b3692358957501791b9cc4b591a8f75fdd" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.273654 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-5cdhr_b6cef9b9-56ee-4d0a-8c13-651e3f649a0e/router/0.log" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.273726 4739 generic.go:334] "Generic (PLEG): container finished" podID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerID="3a9511a2775b08e37ccce91ae91ba1e1e8cf796f076f0c19d9ce73a8baf793c5" exitCode=137 Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.274019 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5cdhr" event={"ID":"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e","Type":"ContainerDied","Data":"3a9511a2775b08e37ccce91ae91ba1e1e8cf796f076f0c19d9ce73a8baf793c5"} Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.281060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gqkq" event={"ID":"65fdc711-6806-433f-9f62-a09e816c6acf","Type":"ContainerStarted","Data":"440e6130bf39495a0a02b8d3fb998fc9dd6f7539606395b70c0d7a272ff71405"} Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.281328 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8gqkq" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.286866 4739 generic.go:334] "Generic (PLEG): container finished" podID="fb608395-17b5-4b92-a0be-b5abc08ac979" 
containerID="a085a0d30a2debdcfa4545d3ddb90ae303e71e3d6d75309c439d719f629caed7" exitCode=1 Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.286908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" event={"ID":"fb608395-17b5-4b92-a0be-b5abc08ac979","Type":"ContainerDied","Data":"a085a0d30a2debdcfa4545d3ddb90ae303e71e3d6d75309c439d719f629caed7"} Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.287697 4739 scope.go:117] "RemoveContainer" containerID="a085a0d30a2debdcfa4545d3ddb90ae303e71e3d6d75309c439d719f629caed7" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.316016 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.406554 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.455277 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.505900 4739 patch_prober.go:28] interesting pod/monitoring-plugin-58bc79f98c-nzqw5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.506431 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podUID="34c89fd8-2d23-4587-a802-4c07ad76bcd7" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.580588 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.621266 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.705763 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.806573 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.150503 4739 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-68g9x container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.150865 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podUID="d2537052-1467-4892-afe4-cafbbdfbd645" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.309607 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" event={"ID":"db4aad67-0ef8-474a-9e92-143738aed5b6","Type":"ContainerStarted","Data":"03b57e74e2832a74da57d2fde6055e5aeb34fdccec0ed1be93f8003848aff1f5"} Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.311183 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.311286 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.311330 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.323429 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" event={"ID":"52927612-b074-4573-aa63-41cbb1d704bf","Type":"ContainerStarted","Data":"488b2ecf524bc1de7290bcb09e9216f76ae0e392d0c99030d98f7041ceab1a52"} Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.324600 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.338622 4739 generic.go:334] "Generic (PLEG): container finished" podID="07815587-810f-4c17-a671-8c613b3755d6" 
containerID="f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea" exitCode=0 Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.338703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnhvq" event={"ID":"07815587-810f-4c17-a671-8c613b3755d6","Type":"ContainerDied","Data":"f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea"} Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.349089 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-5cdhr_b6cef9b9-56ee-4d0a-8c13-651e3f649a0e/router/0.log" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.349402 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5cdhr" event={"ID":"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e","Type":"ContainerStarted","Data":"2f0befe19ae7e085bfe950f663a17fd08137434c2f62964664fd6ccfa5efae50"} Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.358265 4739 generic.go:334] "Generic (PLEG): container finished" podID="4091e4df-be25-4e94-bf12-7079a8ce9b5f" containerID="668e5cf344ed8d06e64315007bd574671cf8c8e1f1fd333153fe7325adbbecad" exitCode=1 Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.358357 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" event={"ID":"4091e4df-be25-4e94-bf12-7079a8ce9b5f","Type":"ContainerDied","Data":"668e5cf344ed8d06e64315007bd574671cf8c8e1f1fd333153fe7325adbbecad"} Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.361106 4739 scope.go:117] "RemoveContainer" containerID="668e5cf344ed8d06e64315007bd574671cf8c8e1f1fd333153fe7325adbbecad" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.368161 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" 
event={"ID":"fb608395-17b5-4b92-a0be-b5abc08ac979","Type":"ContainerStarted","Data":"c0bd26b1eb604066c51f89590061ee9e97354fd35980d19ee79cbb3136a5cdf9"} Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.368294 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.373920 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5023d08-507d-422f-b218-72057e18ef93" containerID="f464ee1c513741325a02b0bed74b4d6dad23cf297d2147cca8e5c0c204eafec2" exitCode=1 Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.373982 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" event={"ID":"d5023d08-507d-422f-b218-72057e18ef93","Type":"ContainerDied","Data":"f464ee1c513741325a02b0bed74b4d6dad23cf297d2147cca8e5c0c204eafec2"} Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.375088 4739 scope.go:117] "RemoveContainer" containerID="f464ee1c513741325a02b0bed74b4d6dad23cf297d2147cca8e5c0c204eafec2" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.381513 4739 generic.go:334] "Generic (PLEG): container finished" podID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerID="54d7a8890659b3c46b4640bcb52cc98af7b156c2ab3e4bf6fa198003af572ff7" exitCode=0 Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.381568 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" event={"ID":"0480fc06-58bc-47d0-9446-8eb7ecad6509","Type":"ContainerDied","Data":"54d7a8890659b3c46b4640bcb52cc98af7b156c2ab3e4bf6fa198003af572ff7"} Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.388168 4739 generic.go:334] "Generic (PLEG): container finished" podID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerID="9544046d49726b08bf59463c644ffe22c27473e133ce5760004a0699f322d56b" exitCode=0 Feb 18 
15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.388349 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" event={"ID":"6a73ee03-bb76-478c-bcd1-2d08f0e6f538","Type":"ContainerDied","Data":"9544046d49726b08bf59463c644ffe22c27473e133ce5760004a0699f322d56b"} Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.394915 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.398653 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fd2fd94b9ccaed5ed1a571fdb7afa96704ef7d65e74faab448f6123159b08bfb"} Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.434530 4739 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-ccsmg container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.434590 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podUID="3886312a-0449-43cc-b914-a4633b2c7e80" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.466174 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.700916 4739 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.702714 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.702768 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.899046 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" containerID="cri-o://9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74" gracePeriod=13 Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.054183 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" containerID="cri-o://fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f" gracePeriod=12 Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.119764 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea is running failed: container process not found" containerID="f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 
15:17:54.121795 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea is running failed: container process not found" containerID="f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.125996 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea is running failed: container process not found" containerID="f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.126088 4739 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.494109 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.495294 4739 generic.go:334] "Generic (PLEG): container finished" podID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerID="56a1307aaf68651b341dd9b1e7344cad7501683c6ef6d4563093ee7194ac943e" exitCode=0 Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.495406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" 
event={"ID":"8166ccce-dd66-40c5-aed1-8f560c573a6e","Type":"ContainerDied","Data":"56a1307aaf68651b341dd9b1e7344cad7501683c6ef6d4563093ee7194ac943e"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.495432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" event={"ID":"8166ccce-dd66-40c5-aed1-8f560c573a6e","Type":"ContainerStarted","Data":"7c10b9e576e31dbde16f6b2eb7e02d83eca868dcd2ec43c014f384aeb777572b"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.497615 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.504941 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.521223 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.536510 4739 generic.go:334] "Generic (PLEG): container finished" podID="fb09df70-be06-48b6-a41d-16fb110b7c55" containerID="f4b0d8e8e140fb6de11974026f9767ddfdf44ffbc0d5f61b072eb7c7dcd22916" exitCode=0 Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.536527 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" 
event={"ID":"fb09df70-be06-48b6-a41d-16fb110b7c55","Type":"ContainerDied","Data":"f4b0d8e8e140fb6de11974026f9767ddfdf44ffbc0d5f61b072eb7c7dcd22916"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.555048 4739 generic.go:334] "Generic (PLEG): container finished" podID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerID="0e3ddc635df525ddd18d3680b1b38102b9456254f940ba8fc0e4a8a2ed29bc7c" exitCode=1 Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.555425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" event={"ID":"6741b4b4-1817-4639-bdf6-b5be2729a1fa","Type":"ContainerDied","Data":"0e3ddc635df525ddd18d3680b1b38102b9456254f940ba8fc0e4a8a2ed29bc7c"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.556203 4739 scope.go:117] "RemoveContainer" containerID="0e3ddc635df525ddd18d3680b1b38102b9456254f940ba8fc0e4a8a2ed29bc7c" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.569429 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" event={"ID":"d5023d08-507d-422f-b218-72057e18ef93","Type":"ContainerStarted","Data":"5fc6aa4b3588196d6933d4bba39468b97269f5d68cc2cb1575e3abf3537fa7f5"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.572105 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.585299 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnhvq" event={"ID":"07815587-810f-4c17-a671-8c613b3755d6","Type":"ContainerStarted","Data":"64f53dfe7f249fc8322fc491805a4d05a0c7aa19f694870fac378263c9063db2"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.601492 4739 generic.go:334] "Generic (PLEG): container finished" podID="0dc6acff-649a-4e95-ba42-ad79dae4a787" 
containerID="714b0e311cf9c7f19440fbee07a029c180a9456bf6cca7b41a364e0fdd30c2ef" exitCode=0 Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.601864 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" event={"ID":"0dc6acff-649a-4e95-ba42-ad79dae4a787","Type":"ContainerDied","Data":"714b0e311cf9c7f19440fbee07a029c180a9456bf6cca7b41a364e0fdd30c2ef"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.601976 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" event={"ID":"0dc6acff-649a-4e95-ba42-ad79dae4a787","Type":"ContainerStarted","Data":"94cb00501c3a4d5e6ef68c5c3c525d7e53ae8dde475b2057415555bb90e3594a"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.609487 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.609822 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.609988 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.619109 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.643539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" event={"ID":"4091e4df-be25-4e94-bf12-7079a8ce9b5f","Type":"ContainerStarted","Data":"b6c27ac8b74af2cdee13930c50556fa3bb6aee4a701357ae58cc42f5641ac48e"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.645141 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.645284 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.651583 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.651722 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.673306 4739 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" event={"ID":"0480fc06-58bc-47d0-9446-8eb7ecad6509","Type":"ContainerStarted","Data":"f79789dc96cf5f56387b2e936fca0f9a26d35a541872d8c725e60632ac6f0364"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.674576 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.674658 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.674690 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.682552 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" event={"ID":"6a73ee03-bb76-478c-bcd1-2d08f0e6f538","Type":"ContainerStarted","Data":"229da6e40d3834292e453f286cca0fae54c4832c5c40a0aaf2e0d0615a7a5a0d"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.682983 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.683025 4739 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.683152 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.683238 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.702831 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.702891 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.142573 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.593010 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.703847 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.704091 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.716756 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" event={"ID":"fb09df70-be06-48b6-a41d-16fb110b7c55","Type":"ContainerStarted","Data":"1400e06ac9c3667644ccc8d255a9a7d8beb088beaa9ca0022d782476d48f59fe"} Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.730162 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" event={"ID":"6741b4b4-1817-4639-bdf6-b5be2729a1fa","Type":"ContainerStarted","Data":"7ca101f91603600ce60b4e3e60d9e95e6228058d92afadaa466ea2ea9808746e"} Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.730364 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: 
connect: connection refused" start-of-body= Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.730413 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.731196 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.733887 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.733935 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.733898 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.733998 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" 
podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.734651 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.734692 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.898681 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 15:17:56 crc kubenswrapper[4739]: E0218 15:17:56.020605 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:56 crc kubenswrapper[4739]: E0218 15:17:56.024611 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74" 
cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:56 crc kubenswrapper[4739]: E0218 15:17:56.032801 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:56 crc kubenswrapper[4739]: E0218 15:17:56.032916 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" Feb 18 15:17:56 crc kubenswrapper[4739]: I0218 15:17:56.616668 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-w8l6z" Feb 18 15:17:56 crc kubenswrapper[4739]: I0218 15:17:56.701508 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 15:17:56 crc kubenswrapper[4739]: I0218 15:17:56.701565 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 15:17:56 crc kubenswrapper[4739]: I0218 15:17:56.739020 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 15:17:56 crc kubenswrapper[4739]: I0218 15:17:56.739069 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.514774 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.606955 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.607031 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.607067 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.607130 4739 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.703782 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 15:17:57 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 15:17:57 crc kubenswrapper[4739]: [+]process-running ok Feb 18 15:17:57 crc kubenswrapper[4739]: healthz check failed Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.704101 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.722776 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.753726 4739 generic.go:334] "Generic (PLEG): container finished" podID="869aa11b-eba7-4598-90dc-d50c642b9120" containerID="9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74" exitCode=0 Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.753780 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerDied","Data":"9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74"} Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.143075 4739 patch_prober.go:28] 
interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.143110 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.143555 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.143471 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.708165 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.709865 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.716210 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.766172 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerStarted","Data":"d07be60b5be3e4f85a67dfb8c57d155a8c34e5d0eef291f493a34dc8761e4361"} Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.966111 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.373353 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.373741 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.373825 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.374978 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:17:59 crc kubenswrapper[4739]: 
I0218 15:17:59.375059 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c" gracePeriod=600 Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.803890 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c" exitCode=0 Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.803960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c"} Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.815835 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.852648 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.857463 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 15:17:59 crc kubenswrapper[4739]: E0218 15:17:59.890225 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947a1bc9_4557_4cd9_aa90_9d3893aad914.slice/crio-conmon-3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c.scope\": RecentStats: unable to find data in memory cache]" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 
15:18:00.022294 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.097467 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.106992 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.351333 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.538886 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.623340 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.627024 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.694324 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.819828 4739 generic.go:334] "Generic (PLEG): container finished" podID="acc9bbc5-8705-410b-977b-ca9245834e36" containerID="fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f" exitCode=0 Feb 18 
15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.819896 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerDied","Data":"fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f"} Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.819921 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerStarted","Data":"68907d712cca3f7de51c445863d41f9dd8dfa7fa7896e8b60ec1027b8593cae6"} Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.824333 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"} Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.069506 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.148546 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.295623 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.505083 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.648034 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:18:01 crc 
kubenswrapper[4739]: I0218 15:18:01.648413 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.648513 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 15:18:02 crc kubenswrapper[4739]: I0218 15:18:02.052230 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 15:18:02 crc kubenswrapper[4739]: I0218 15:18:02.086596 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 15:18:02 crc kubenswrapper[4739]: I0218 15:18:02.155949 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 15:18:02 crc kubenswrapper[4739]: I0218 15:18:02.438593 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.018563 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.018680 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.019815 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.019878 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" containerID="cri-o://c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665" gracePeriod=30 Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.228676 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:03 crc kubenswrapper[4739]: E0218 15:18:03.230748 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.230773 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" Feb 18 15:18:03 crc kubenswrapper[4739]: E0218 15:18:03.230802 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="extract-utilities" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.230808 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="extract-utilities" Feb 18 15:18:03 crc kubenswrapper[4739]: E0218 15:18:03.230828 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="extract-content" Feb 18 
15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.230835 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="extract-content" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.231127 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.242616 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.316753 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgq8q\" (UniqueName: \"kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.316947 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.317080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.338258 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.419609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.420594 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.420627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.420602 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.421165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgq8q\" (UniqueName: \"kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " 
pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.457684 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgq8q\" (UniqueName: \"kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.574809 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.115004 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.115378 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.283203 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.595561 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.595899 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.924420 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.939712 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 15:18:04 crc 
kubenswrapper[4739]: I0218 15:18:04.974014 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:05 crc kubenswrapper[4739]: I0218 15:18:05.683776 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 15:18:05 crc kubenswrapper[4739]: I0218 15:18:05.887270 4739 generic.go:334] "Generic (PLEG): container finished" podID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerID="7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb" exitCode=0 Feb 18 15:18:05 crc kubenswrapper[4739]: I0218 15:18:05.887372 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerDied","Data":"7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb"} Feb 18 15:18:05 crc kubenswrapper[4739]: I0218 15:18:05.887654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerStarted","Data":"3c5afd23ff36fa31fef20ba87125c6becb20c52925093543710d4d4c92ef82c5"} Feb 18 15:18:06 crc kubenswrapper[4739]: I0218 15:18:06.009354 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 15:18:06 crc kubenswrapper[4739]: I0218 15:18:06.010010 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 15:18:06 crc kubenswrapper[4739]: I0218 15:18:06.449889 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 15:18:06 crc kubenswrapper[4739]: I0218 15:18:06.901944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" 
event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerStarted","Data":"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7"} Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.176934 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8gqkq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.203858 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.206611 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.248872 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.327656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkn28\" (UniqueName: \"kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.327965 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.328544 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content\") pod \"redhat-operators-2kdgq\" (UID: 
\"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.430964 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.431137 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.431194 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkn28\" (UniqueName: \"kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.432370 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.432426 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " 
pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.452259 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkn28\" (UniqueName: \"kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.540151 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.611907 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.392295 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.951794 4739 generic.go:334] "Generic (PLEG): container finished" podID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerID="e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7" exitCode=0 Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.952034 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerDied","Data":"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7"} Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.956599 4739 generic.go:334] "Generic (PLEG): container finished" podID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerID="5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090" exitCode=0 Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.956691 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerDied","Data":"5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090"} Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.956730 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerStarted","Data":"1d350450ce9c4bc4c65ff0ae502f9f800a652d5d5a2f99a2c8e967161fb37f2b"} Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.967429 4739 generic.go:334] "Generic (PLEG): container finished" podID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerID="c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665" exitCode=0 Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.967508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerDied","Data":"c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665"} Feb 18 15:18:09 crc kubenswrapper[4739]: I0218 15:18:09.987594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerStarted","Data":"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad"} Feb 18 15:18:09 crc kubenswrapper[4739]: I0218 15:18:09.995108 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerStarted","Data":"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1"} Feb 18 15:18:10 crc kubenswrapper[4739]: I0218 15:18:10.008649 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qkbjq" podStartSLOduration=3.2616291889999998 podStartE2EDuration="7.007774678s" 
podCreationTimestamp="2026-02-18 15:18:03 +0000 UTC" firstStartedPulling="2026-02-18 15:18:05.890281019 +0000 UTC m=+4718.386001941" lastFinishedPulling="2026-02-18 15:18:09.636426508 +0000 UTC m=+4722.132147430" observedRunningTime="2026-02-18 15:18:10.006689531 +0000 UTC m=+4722.502410473" watchObservedRunningTime="2026-02-18 15:18:10.007774678 +0000 UTC m=+4722.503495600" Feb 18 15:18:11 crc kubenswrapper[4739]: I0218 15:18:11.021770 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerStarted","Data":"14efbd72afaf309190c1330115bb501e01a5e04256ff4703359f3eda7a513f37"} Feb 18 15:18:11 crc kubenswrapper[4739]: I0218 15:18:11.648248 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 15:18:11 crc kubenswrapper[4739]: I0218 15:18:11.648496 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 15:18:13 crc kubenswrapper[4739]: I0218 15:18:13.575562 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:13 crc kubenswrapper[4739]: I0218 15:18:13.576200 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:13 crc kubenswrapper[4739]: I0218 15:18:13.645681 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:14 crc kubenswrapper[4739]: I0218 15:18:14.109781 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:15 crc kubenswrapper[4739]: I0218 15:18:15.196719 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:15 crc kubenswrapper[4739]: I0218 15:18:15.299271 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" containerID="cri-o://873aca0bbc81a7124b75ae87a2863a7a8a119c825b1bc26fde747334cd6eb3e4" gracePeriod=15 Feb 18 15:18:15 crc kubenswrapper[4739]: I0218 15:18:15.997747 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.023839 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.075955 4739 generic.go:334] "Generic (PLEG): container finished" podID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerID="873aca0bbc81a7124b75ae87a2863a7a8a119c825b1bc26fde747334cd6eb3e4" exitCode=0 Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076063 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" event={"ID":"bcd76c5a-1d18-4986-9be4-399139f65c11","Type":"ContainerDied","Data":"873aca0bbc81a7124b75ae87a2863a7a8a119c825b1bc26fde747334cd6eb3e4"} Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076116 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" event={"ID":"bcd76c5a-1d18-4986-9be4-399139f65c11","Type":"ContainerStarted","Data":"5a2ae0a4472b6c53563aa19aa5c52aa81a233460c30511d55d6d99288b66a85a"} Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076185 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qkbjq" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="registry-server" containerID="cri-o://8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad" gracePeriod=2 Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076428 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076718 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076758 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.015963 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.033745 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities\") pod \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.033818 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgq8q\" (UniqueName: \"kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q\") pod \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.033860 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content\") pod \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.035926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities" (OuterVolumeSpecName: "utilities") pod "82060158-06b2-4cf9-9f4a-57fe3e3b9916" (UID: "82060158-06b2-4cf9-9f4a-57fe3e3b9916"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.045360 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q" (OuterVolumeSpecName: "kube-api-access-pgq8q") pod "82060158-06b2-4cf9-9f4a-57fe3e3b9916" (UID: "82060158-06b2-4cf9-9f4a-57fe3e3b9916"). InnerVolumeSpecName "kube-api-access-pgq8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.107914 4739 generic.go:334] "Generic (PLEG): container finished" podID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerID="8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad" exitCode=0 Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.108172 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerDied","Data":"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad"} Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.108328 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerDied","Data":"3c5afd23ff36fa31fef20ba87125c6becb20c52925093543710d4d4c92ef82c5"} Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.108195 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.108354 4739 scope.go:117] "RemoveContainer" containerID="8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.119704 4739 generic.go:334] "Generic (PLEG): container finished" podID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerID="eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1" exitCode=0 Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.120210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerDied","Data":"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1"} Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.123532 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82060158-06b2-4cf9-9f4a-57fe3e3b9916" (UID: "82060158-06b2-4cf9-9f4a-57fe3e3b9916"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.127897 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.137942 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.137971 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgq8q\" (UniqueName: \"kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.137979 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.177991 4739 scope.go:117] "RemoveContainer" containerID="e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.236236 4739 scope.go:117] "RemoveContainer" containerID="7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.274590 4739 scope.go:117] "RemoveContainer" containerID="8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad" Feb 18 15:18:17 crc kubenswrapper[4739]: E0218 15:18:17.275156 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad\": container with ID starting with 8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad not found: ID does not exist" 
containerID="8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.275193 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad"} err="failed to get container status \"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad\": rpc error: code = NotFound desc = could not find container \"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad\": container with ID starting with 8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad not found: ID does not exist" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.275219 4739 scope.go:117] "RemoveContainer" containerID="e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7" Feb 18 15:18:17 crc kubenswrapper[4739]: E0218 15:18:17.275585 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7\": container with ID starting with e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7 not found: ID does not exist" containerID="e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.275612 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7"} err="failed to get container status \"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7\": rpc error: code = NotFound desc = could not find container \"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7\": container with ID starting with e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7 not found: ID does not exist" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.275629 4739 scope.go:117] 
"RemoveContainer" containerID="7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb" Feb 18 15:18:17 crc kubenswrapper[4739]: E0218 15:18:17.275962 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb\": container with ID starting with 7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb not found: ID does not exist" containerID="7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.275995 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb"} err="failed to get container status \"7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb\": rpc error: code = NotFound desc = could not find container \"7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb\": container with ID starting with 7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb not found: ID does not exist" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.445222 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.466982 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:18 crc kubenswrapper[4739]: I0218 15:18:18.136356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerStarted","Data":"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41"} Feb 18 15:18:18 crc kubenswrapper[4739]: I0218 15:18:18.163651 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-2kdgq" podStartSLOduration=2.543031862 podStartE2EDuration="11.163631291s" podCreationTimestamp="2026-02-18 15:18:07 +0000 UTC" firstStartedPulling="2026-02-18 15:18:08.959200411 +0000 UTC m=+4721.454921333" lastFinishedPulling="2026-02-18 15:18:17.57979984 +0000 UTC m=+4730.075520762" observedRunningTime="2026-02-18 15:18:18.15725082 +0000 UTC m=+4730.652971762" watchObservedRunningTime="2026-02-18 15:18:18.163631291 +0000 UTC m=+4730.659352213" Feb 18 15:18:18 crc kubenswrapper[4739]: I0218 15:18:18.426937 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" path="/var/lib/kubelet/pods/82060158-06b2-4cf9-9f4a-57fe3e3b9916/volumes" Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.023008 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.649546 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.649606 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.649660 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.651187 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"fd2fd94b9ccaed5ed1a571fdb7afa96704ef7d65e74faab448f6123159b08bfb"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.651345 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://fd2fd94b9ccaed5ed1a571fdb7afa96704ef7d65e74faab448f6123159b08bfb" gracePeriod=30 Feb 18 15:18:24 crc kubenswrapper[4739]: I0218 15:18:24.483030 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 15:18:26 crc kubenswrapper[4739]: I0218 15:18:26.022067 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:27 crc kubenswrapper[4739]: I0218 15:18:27.540571 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:27 crc kubenswrapper[4739]: I0218 15:18:27.540963 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:28 crc kubenswrapper[4739]: I0218 15:18:28.623872 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" 
containerName="registry-server" probeResult="failure" output=< Feb 18 15:18:28 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:18:28 crc kubenswrapper[4739]: > Feb 18 15:18:31 crc kubenswrapper[4739]: I0218 15:18:31.014873 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:36 crc kubenswrapper[4739]: I0218 15:18:36.018225 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:38 crc kubenswrapper[4739]: I0218 15:18:38.595912 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" probeResult="failure" output=< Feb 18 15:18:38 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:18:38 crc kubenswrapper[4739]: > Feb 18 15:18:41 crc kubenswrapper[4739]: I0218 15:18:41.020563 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:46 crc kubenswrapper[4739]: I0218 15:18:46.043883 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:46 crc kubenswrapper[4739]: I0218 15:18:46.470125 4739 generic.go:334] "Generic (PLEG): container finished" podID="2d70fa76-2eec-4ca5-abd7-44a082625a40" 
containerID="8ce8bd03e7ae58cb2a6f6888de57ac7cc952f171cde62e5925154c461eb9d79b" exitCode=1 Feb 18 15:18:46 crc kubenswrapper[4739]: I0218 15:18:46.470169 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2d70fa76-2eec-4ca5-abd7-44a082625a40","Type":"ContainerDied","Data":"8ce8bd03e7ae58cb2a6f6888de57ac7cc952f171cde62e5925154c461eb9d79b"} Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.033792 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202069 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202212 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202241 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202268 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 
15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202319 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202385 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-964bz\" (UniqueName: \"kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202484 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202588 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202619 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.203282 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.203869 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data" (OuterVolumeSpecName: "config-data") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.209644 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.246150 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.290308 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.292623 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz" (OuterVolumeSpecName: "kube-api-access-964bz") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "kube-api-access-964bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305482 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305526 4739 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305538 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-964bz\" (UniqueName: \"kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305547 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305556 4739 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305582 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.306182 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.333397 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.343952 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.353610 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.408176 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.408547 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.408661 4739 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.408750 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.504584 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2d70fa76-2eec-4ca5-abd7-44a082625a40","Type":"ContainerDied","Data":"49f393666c6fdee741ccda2b76d76452444d662539e8f00cf321ebbda9fd14bc"} Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.504631 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f393666c6fdee741ccda2b76d76452444d662539e8f00cf321ebbda9fd14bc" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.504686 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.602270 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" probeResult="failure" output=< Feb 18 15:18:48 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:18:48 crc kubenswrapper[4739]: > Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.766892 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 15:18:50 crc kubenswrapper[4739]: E0218 15:18:50.768081 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="registry-server" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768101 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="registry-server" Feb 18 15:18:50 crc kubenswrapper[4739]: E0218 15:18:50.768158 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d70fa76-2eec-4ca5-abd7-44a082625a40" containerName="tempest-tests-tempest-tests-runner" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768167 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d70fa76-2eec-4ca5-abd7-44a082625a40" containerName="tempest-tests-tempest-tests-runner" Feb 18 15:18:50 crc kubenswrapper[4739]: E0218 15:18:50.768199 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="extract-utilities" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768208 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="extract-utilities" Feb 18 15:18:50 crc kubenswrapper[4739]: E0218 15:18:50.768224 4739 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="extract-content" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768231 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="extract-content" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768542 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="registry-server" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768578 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d70fa76-2eec-4ca5-abd7-44a082625a40" containerName="tempest-tests-tempest-tests-runner" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.769593 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.774098 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qfs6g" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.779500 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.867269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wqsk\" (UniqueName: \"kubernetes.io/projected/fafc1147-dd3a-429c-ae6f-48865401c68b-kube-api-access-9wqsk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.867537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.970282 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wqsk\" (UniqueName: \"kubernetes.io/projected/fafc1147-dd3a-429c-ae6f-48865401c68b-kube-api-access-9wqsk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.970376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.971079 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.995350 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wqsk\" (UniqueName: \"kubernetes.io/projected/fafc1147-dd3a-429c-ae6f-48865401c68b-kube-api-access-9wqsk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 
15:18:51 crc kubenswrapper[4739]: I0218 15:18:51.012308 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:51 crc kubenswrapper[4739]: I0218 15:18:51.018212 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:51 crc kubenswrapper[4739]: I0218 15:18:51.101824 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:51 crc kubenswrapper[4739]: I0218 15:18:51.705992 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 15:18:51 crc kubenswrapper[4739]: I0218 15:18:51.713950 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.557505 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.559992 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.562631 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="fd2fd94b9ccaed5ed1a571fdb7afa96704ef7d65e74faab448f6123159b08bfb" exitCode=137 Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.562703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fd2fd94b9ccaed5ed1a571fdb7afa96704ef7d65e74faab448f6123159b08bfb"} Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.562738 4739 scope.go:117] "RemoveContainer" containerID="610a047b229be1341e5743f79181f9b3692358957501791b9cc4b591a8f75fdd" Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.565383 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fafc1147-dd3a-429c-ae6f-48865401c68b","Type":"ContainerStarted","Data":"08c8e2669c846005f6059f146d98779ae2b4e462d895c79341686389411ee000"} Feb 18 15:18:53 crc kubenswrapper[4739]: I0218 15:18:53.584589 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Feb 18 15:18:53 crc kubenswrapper[4739]: I0218 15:18:53.587541 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a81dd773adf0ddee9b34eb2f33f2c3c798fa05884811efd4b1dff5fa5252df71"} Feb 18 15:18:53 crc kubenswrapper[4739]: I0218 15:18:53.589605 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fafc1147-dd3a-429c-ae6f-48865401c68b","Type":"ContainerStarted","Data":"fb3322bdc5fbf1408dfee781cff5b9ab1904ac38c88d720ff5a08732d504f1bd"} Feb 18 15:18:53 crc kubenswrapper[4739]: I0218 15:18:53.629240 4739 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.25632063 podStartE2EDuration="3.62922077s" podCreationTimestamp="2026-02-18 15:18:50 +0000 UTC" firstStartedPulling="2026-02-18 15:18:51.713660998 +0000 UTC m=+4764.209381920" lastFinishedPulling="2026-02-18 15:18:53.086561128 +0000 UTC m=+4765.582282060" observedRunningTime="2026-02-18 15:18:53.624108291 +0000 UTC m=+4766.119829223" watchObservedRunningTime="2026-02-18 15:18:53.62922077 +0000 UTC m=+4766.124941692" Feb 18 15:18:56 crc kubenswrapper[4739]: I0218 15:18:56.022764 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:57 crc kubenswrapper[4739]: I0218 15:18:57.722214 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:18:58 crc kubenswrapper[4739]: I0218 15:18:58.595459 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" probeResult="failure" output=< Feb 18 15:18:58 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:18:58 crc kubenswrapper[4739]: > Feb 18 15:19:01 crc kubenswrapper[4739]: I0218 15:19:01.019726 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:19:01 crc kubenswrapper[4739]: I0218 15:19:01.648179 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:19:01 crc kubenswrapper[4739]: I0218 
15:19:01.651982 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:19:06 crc kubenswrapper[4739]: I0218 15:19:06.138095 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:19:07 crc kubenswrapper[4739]: I0218 15:19:07.593143 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:19:07 crc kubenswrapper[4739]: I0218 15:19:07.650304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:19:07 crc kubenswrapper[4739]: I0218 15:19:07.727698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:19:08 crc kubenswrapper[4739]: I0218 15:19:08.425985 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.154525 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" containerID="cri-o://58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" gracePeriod=2 Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.773420 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.904208 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content\") pod \"3a13d0fc-5518-446d-8ce5-32db175f8570\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.904376 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities\") pod \"3a13d0fc-5518-446d-8ce5-32db175f8570\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.904613 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkn28\" (UniqueName: \"kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28\") pod \"3a13d0fc-5518-446d-8ce5-32db175f8570\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.905090 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities" (OuterVolumeSpecName: "utilities") pod "3a13d0fc-5518-446d-8ce5-32db175f8570" (UID: "3a13d0fc-5518-446d-8ce5-32db175f8570"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.905461 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.033365 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a13d0fc-5518-446d-8ce5-32db175f8570" (UID: "3a13d0fc-5518-446d-8ce5-32db175f8570"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.110825 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.170318 4739 generic.go:334] "Generic (PLEG): container finished" podID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerID="58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" exitCode=0 Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.170389 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.170425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerDied","Data":"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41"} Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.170742 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerDied","Data":"1d350450ce9c4bc4c65ff0ae502f9f800a652d5d5a2f99a2c8e967161fb37f2b"} Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.170778 4739 scope.go:117] "RemoveContainer" containerID="58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.205412 4739 scope.go:117] "RemoveContainer" containerID="eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.623977 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28" (OuterVolumeSpecName: "kube-api-access-pkn28") pod "3a13d0fc-5518-446d-8ce5-32db175f8570" (UID: "3a13d0fc-5518-446d-8ce5-32db175f8570"). InnerVolumeSpecName "kube-api-access-pkn28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.633774 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkn28\" (UniqueName: \"kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28\") on node \"crc\" DevicePath \"\"" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.686580 4739 scope.go:117] "RemoveContainer" containerID="5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.774838 4739 scope.go:117] "RemoveContainer" containerID="58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" Feb 18 15:19:10 crc kubenswrapper[4739]: E0218 15:19:10.775658 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41\": container with ID starting with 58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41 not found: ID does not exist" containerID="58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.775724 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41"} err="failed to get container status \"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41\": rpc error: code = NotFound desc = could not find container \"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41\": container with ID starting with 58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41 not found: ID does not exist" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.775762 4739 scope.go:117] "RemoveContainer" containerID="eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1" Feb 18 15:19:10 crc kubenswrapper[4739]: E0218 15:19:10.776136 
4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1\": container with ID starting with eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1 not found: ID does not exist" containerID="eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.776182 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1"} err="failed to get container status \"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1\": rpc error: code = NotFound desc = could not find container \"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1\": container with ID starting with eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1 not found: ID does not exist" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.776243 4739 scope.go:117] "RemoveContainer" containerID="5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090" Feb 18 15:19:10 crc kubenswrapper[4739]: E0218 15:19:10.776563 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090\": container with ID starting with 5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090 not found: ID does not exist" containerID="5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.776592 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090"} err="failed to get container status \"5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090\": rpc error: code = 
NotFound desc = could not find container \"5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090\": container with ID starting with 5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090 not found: ID does not exist" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.863283 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.876313 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:19:11 crc kubenswrapper[4739]: I0218 15:19:11.017310 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:19:11 crc kubenswrapper[4739]: I0218 15:19:11.017405 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 15:19:11 crc kubenswrapper[4739]: I0218 15:19:11.018472 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"14efbd72afaf309190c1330115bb501e01a5e04256ff4703359f3eda7a513f37"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed startup probe, will be restarted" Feb 18 15:19:11 crc kubenswrapper[4739]: I0218 15:19:11.018530 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" containerID="cri-o://14efbd72afaf309190c1330115bb501e01a5e04256ff4703359f3eda7a513f37" gracePeriod=30 Feb 18 15:19:12 crc kubenswrapper[4739]: I0218 15:19:12.424288 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" 
path="/var/lib/kubelet/pods/3a13d0fc-5518-446d-8ce5-32db175f8570/volumes" Feb 18 15:19:13 crc kubenswrapper[4739]: I0218 15:19:13.626639 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 15:19:13 crc kubenswrapper[4739]: I0218 15:19:13.805155 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 15:19:15 crc kubenswrapper[4739]: I0218 15:19:15.034097 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 15:19:15 crc kubenswrapper[4739]: I0218 15:19:15.152864 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 15:19:41 crc kubenswrapper[4739]: I0218 15:19:41.532935 4739 generic.go:334] "Generic (PLEG): container finished" podID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerID="14efbd72afaf309190c1330115bb501e01a5e04256ff4703359f3eda7a513f37" exitCode=137 Feb 18 15:19:41 crc kubenswrapper[4739]: I0218 15:19:41.533411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerDied","Data":"14efbd72afaf309190c1330115bb501e01a5e04256ff4703359f3eda7a513f37"} Feb 18 15:19:41 crc kubenswrapper[4739]: I0218 15:19:41.533467 4739 scope.go:117] "RemoveContainer" containerID="c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665" Feb 18 15:19:43 crc kubenswrapper[4739]: I0218 15:19:43.642484 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerStarted","Data":"4e7eefd05554da540ad3b190cd2d33f16c7b3628d6ddec497c855a8642997bf8"} Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.490148 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-26llf/must-gather-vps8f"] Feb 18 
15:19:44 crc kubenswrapper[4739]: E0218 15:19:44.490930 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="extract-utilities" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.490949 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="extract-utilities" Feb 18 15:19:44 crc kubenswrapper[4739]: E0218 15:19:44.490965 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.490977 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" Feb 18 15:19:44 crc kubenswrapper[4739]: E0218 15:19:44.491032 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="extract-content" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.491038 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="extract-content" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.491325 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.493582 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.498095 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-26llf"/"default-dockercfg-lhmph" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.500564 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-26llf"/"openshift-service-ca.crt" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.511099 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-26llf"/"kube-root-ca.crt" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.524064 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-26llf/must-gather-vps8f"] Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.614212 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.614273 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkm4z\" (UniqueName: \"kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.717105 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " 
pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.717175 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkm4z\" (UniqueName: \"kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.718356 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.739433 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkm4z\" (UniqueName: \"kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.814877 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:45 crc kubenswrapper[4739]: I0218 15:19:45.409145 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-26llf/must-gather-vps8f"] Feb 18 15:19:45 crc kubenswrapper[4739]: I0218 15:19:45.666074 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/must-gather-vps8f" event={"ID":"205cb55b-f489-4c55-aa9e-13f9ff38def6","Type":"ContainerStarted","Data":"7fd91882f1ac653843f4cd5b72d79b58ed24b2c8236be2c5a2b3ea911970f5fb"} Feb 18 15:19:45 crc kubenswrapper[4739]: I0218 15:19:45.996496 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 15:19:51 crc kubenswrapper[4739]: I0218 15:19:51.028963 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 15:19:53 crc kubenswrapper[4739]: I0218 15:19:53.775583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/must-gather-vps8f" event={"ID":"205cb55b-f489-4c55-aa9e-13f9ff38def6","Type":"ContainerStarted","Data":"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a"} Feb 18 15:19:54 crc kubenswrapper[4739]: I0218 15:19:54.789508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/must-gather-vps8f" event={"ID":"205cb55b-f489-4c55-aa9e-13f9ff38def6","Type":"ContainerStarted","Data":"18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276"} Feb 18 15:19:54 crc kubenswrapper[4739]: I0218 15:19:54.811510 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-26llf/must-gather-vps8f" podStartSLOduration=2.697307689 podStartE2EDuration="10.8114873s" podCreationTimestamp="2026-02-18 15:19:44 +0000 UTC" firstStartedPulling="2026-02-18 15:19:45.40759195 +0000 UTC m=+4817.903312872" lastFinishedPulling="2026-02-18 15:19:53.521771561 
+0000 UTC m=+4826.017492483" observedRunningTime="2026-02-18 15:19:54.803490368 +0000 UTC m=+4827.299211280" watchObservedRunningTime="2026-02-18 15:19:54.8114873 +0000 UTC m=+4827.307208222" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.649154 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-26llf/crc-debug-kp8qw"] Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.651502 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.727635 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8f4\" (UniqueName: \"kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.727721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.829684 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf8f4\" (UniqueName: \"kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.829819 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host\") pod \"crc-debug-kp8qw\" 
(UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.829922 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.852098 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf8f4\" (UniqueName: \"kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.973910 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:20:00 crc kubenswrapper[4739]: W0218 15:20:00.118948 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod456c8847_a5c4_43ee_8d46_eb5bf5b8c5d6.slice/crio-b4f150046f92991647c6405bea63cf910e4a9129211001660e56be7045bf2368 WatchSource:0}: Error finding container b4f150046f92991647c6405bea63cf910e4a9129211001660e56be7045bf2368: Status 404 returned error can't find the container with id b4f150046f92991647c6405bea63cf910e4a9129211001660e56be7045bf2368 Feb 18 15:20:00 crc kubenswrapper[4739]: I0218 15:20:00.858246 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-kp8qw" event={"ID":"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6","Type":"ContainerStarted","Data":"b4f150046f92991647c6405bea63cf910e4a9129211001660e56be7045bf2368"} Feb 18 15:20:13 crc kubenswrapper[4739]: I0218 15:20:13.040258 4739 generic.go:334] "Generic (PLEG): 
container finished" podID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerID="3d8147b125cb5878360a74eb88bb0e2f86a338193df75f8534e81151d855bde8" exitCode=0 Feb 18 15:20:13 crc kubenswrapper[4739]: I0218 15:20:13.040344 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" event={"ID":"ac03ed3e-3bdc-48cd-bf95-119b31b15208","Type":"ContainerDied","Data":"3d8147b125cb5878360a74eb88bb0e2f86a338193df75f8534e81151d855bde8"} Feb 18 15:20:15 crc kubenswrapper[4739]: I0218 15:20:15.065083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-kp8qw" event={"ID":"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6","Type":"ContainerStarted","Data":"dde35ceb92507110d1347cc0f0f430467fb8403374a8fefc912d25f552c9bfdb"} Feb 18 15:20:15 crc kubenswrapper[4739]: I0218 15:20:15.068502 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" event={"ID":"ac03ed3e-3bdc-48cd-bf95-119b31b15208","Type":"ContainerStarted","Data":"7948fcc51192a2e4056032987b29fa6cf39414a1ecb40405336d586c238b0116"} Feb 18 15:20:15 crc kubenswrapper[4739]: I0218 15:20:15.096175 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-26llf/crc-debug-kp8qw" podStartSLOduration=2.172356903 podStartE2EDuration="16.096150687s" podCreationTimestamp="2026-02-18 15:19:59 +0000 UTC" firstStartedPulling="2026-02-18 15:20:00.122277717 +0000 UTC m=+4832.617998639" lastFinishedPulling="2026-02-18 15:20:14.046071501 +0000 UTC m=+4846.541792423" observedRunningTime="2026-02-18 15:20:15.089793917 +0000 UTC m=+4847.585514859" watchObservedRunningTime="2026-02-18 15:20:15.096150687 +0000 UTC m=+4847.591871619" Feb 18 15:20:29 crc kubenswrapper[4739]: I0218 15:20:29.372625 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:20:29 crc kubenswrapper[4739]: I0218 15:20:29.373234 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:20:31 crc kubenswrapper[4739]: I0218 15:20:31.109729 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 15:20:31 crc kubenswrapper[4739]: I0218 15:20:31.110082 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 15:20:51 crc kubenswrapper[4739]: I0218 15:20:51.125434 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 15:20:51 crc kubenswrapper[4739]: I0218 15:20:51.134387 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 15:20:59 crc kubenswrapper[4739]: I0218 15:20:59.372827 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:20:59 crc kubenswrapper[4739]: I0218 15:20:59.373397 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 15:21:02 crc kubenswrapper[4739]: I0218 15:21:02.658238 4739 generic.go:334] "Generic (PLEG): container finished" podID="456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" containerID="dde35ceb92507110d1347cc0f0f430467fb8403374a8fefc912d25f552c9bfdb" exitCode=0 Feb 18 15:21:02 crc kubenswrapper[4739]: I0218 15:21:02.658840 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-kp8qw" event={"ID":"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6","Type":"ContainerDied","Data":"dde35ceb92507110d1347cc0f0f430467fb8403374a8fefc912d25f552c9bfdb"} Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.811606 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.851269 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-26llf/crc-debug-kp8qw"] Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.864303 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-26llf/crc-debug-kp8qw"] Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.942007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf8f4\" (UniqueName: \"kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4\") pod \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.942085 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host\") pod \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.942836 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host" (OuterVolumeSpecName: "host") pod "456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" (UID: "456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.950898 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4" (OuterVolumeSpecName: "kube-api-access-mf8f4") pod "456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" (UID: "456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6"). InnerVolumeSpecName "kube-api-access-mf8f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:21:04 crc kubenswrapper[4739]: I0218 15:21:04.045537 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf8f4\" (UniqueName: \"kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:04 crc kubenswrapper[4739]: I0218 15:21:04.045790 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:04 crc kubenswrapper[4739]: I0218 15:21:04.423872 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" path="/var/lib/kubelet/pods/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6/volumes" Feb 18 15:21:04 crc kubenswrapper[4739]: I0218 15:21:04.693012 4739 scope.go:117] "RemoveContainer" containerID="dde35ceb92507110d1347cc0f0f430467fb8403374a8fefc912d25f552c9bfdb" Feb 18 15:21:04 crc kubenswrapper[4739]: I0218 15:21:04.693094 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.057041 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-26llf/crc-debug-jv5kf"] Feb 18 15:21:05 crc kubenswrapper[4739]: E0218 15:21:05.057653 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" containerName="container-00" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.057669 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" containerName="container-00" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.057954 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" containerName="container-00" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.058995 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.172957 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchnw\" (UniqueName: \"kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.173199 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.275388 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchnw\" (UniqueName: 
\"kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.275840 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.275953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.294667 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xchnw\" (UniqueName: \"kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.376644 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.705663 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-jv5kf" event={"ID":"e2ca88e2-2ad2-41a2-ae52-74cf09b22275","Type":"ContainerStarted","Data":"fdddc444b46cae2a673c333898ccb719c75ad18fc7f6169f3dc5744334119cf3"} Feb 18 15:21:06 crc kubenswrapper[4739]: I0218 15:21:06.718949 4739 generic.go:334] "Generic (PLEG): container finished" podID="e2ca88e2-2ad2-41a2-ae52-74cf09b22275" containerID="7916fd68986056bd3242a9e47080df5316e2eaa9c4630168c7e653cc8da14d93" exitCode=0 Feb 18 15:21:06 crc kubenswrapper[4739]: I0218 15:21:06.719031 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-jv5kf" event={"ID":"e2ca88e2-2ad2-41a2-ae52-74cf09b22275","Type":"ContainerDied","Data":"7916fd68986056bd3242a9e47080df5316e2eaa9c4630168c7e653cc8da14d93"} Feb 18 15:21:07 crc kubenswrapper[4739]: I0218 15:21:07.884516 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.047049 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host\") pod \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.047222 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xchnw\" (UniqueName: \"kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw\") pod \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.047760 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host" (OuterVolumeSpecName: "host") pod "e2ca88e2-2ad2-41a2-ae52-74cf09b22275" (UID: "e2ca88e2-2ad2-41a2-ae52-74cf09b22275"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.048170 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.053095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw" (OuterVolumeSpecName: "kube-api-access-xchnw") pod "e2ca88e2-2ad2-41a2-ae52-74cf09b22275" (UID: "e2ca88e2-2ad2-41a2-ae52-74cf09b22275"). InnerVolumeSpecName "kube-api-access-xchnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.152589 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xchnw\" (UniqueName: \"kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.628310 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-26llf/crc-debug-jv5kf"] Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.640458 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-26llf/crc-debug-jv5kf"] Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.742372 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdddc444b46cae2a673c333898ccb719c75ad18fc7f6169f3dc5744334119cf3" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.742439 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.790085 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-26llf/crc-debug-csfkl"] Feb 18 15:21:09 crc kubenswrapper[4739]: E0218 15:21:09.790766 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ca88e2-2ad2-41a2-ae52-74cf09b22275" containerName="container-00" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.790784 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ca88e2-2ad2-41a2-ae52-74cf09b22275" containerName="container-00" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.791092 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ca88e2-2ad2-41a2-ae52-74cf09b22275" containerName="container-00" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.792151 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.898657 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dztqg\" (UniqueName: \"kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.898749 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.001009 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dztqg\" (UniqueName: \"kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.001145 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.001370 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc 
kubenswrapper[4739]: I0218 15:21:10.298549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dztqg\" (UniqueName: \"kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.418515 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.428037 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2ca88e2-2ad2-41a2-ae52-74cf09b22275" path="/var/lib/kubelet/pods/e2ca88e2-2ad2-41a2-ae52-74cf09b22275/volumes" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.765875 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-csfkl" event={"ID":"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d","Type":"ContainerStarted","Data":"067acd8d691f5999900b21458078b086cf69dbe3dd8ced0d38139f6e5b8c731c"} Feb 18 15:21:11 crc kubenswrapper[4739]: I0218 15:21:11.778648 4739 generic.go:334] "Generic (PLEG): container finished" podID="af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" containerID="ab7dbc53680c705d78f186a04e28323aa311ec8027168cbbbaabc3a388c4677e" exitCode=0 Feb 18 15:21:11 crc kubenswrapper[4739]: I0218 15:21:11.778712 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-csfkl" event={"ID":"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d","Type":"ContainerDied","Data":"ab7dbc53680c705d78f186a04e28323aa311ec8027168cbbbaabc3a388c4677e"} Feb 18 15:21:11 crc kubenswrapper[4739]: I0218 15:21:11.821391 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-26llf/crc-debug-csfkl"] Feb 18 15:21:11 crc kubenswrapper[4739]: I0218 15:21:11.832846 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-26llf/crc-debug-csfkl"] Feb 18 15:21:12 crc kubenswrapper[4739]: I0218 15:21:12.919475 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.093780 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztqg\" (UniqueName: \"kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg\") pod \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.093937 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host\") pod \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.094040 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host" (OuterVolumeSpecName: "host") pod "af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" (UID: "af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.094807 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.117000 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg" (OuterVolumeSpecName: "kube-api-access-dztqg") pod "af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" (UID: "af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d"). 
InnerVolumeSpecName "kube-api-access-dztqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.197893 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dztqg\" (UniqueName: \"kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.804045 4739 scope.go:117] "RemoveContainer" containerID="ab7dbc53680c705d78f186a04e28323aa311ec8027168cbbbaabc3a388c4677e" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.804096 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:14 crc kubenswrapper[4739]: I0218 15:21:14.423597 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" path="/var/lib/kubelet/pods/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d/volumes" Feb 18 15:21:29 crc kubenswrapper[4739]: I0218 15:21:29.372568 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:21:29 crc kubenswrapper[4739]: I0218 15:21:29.373216 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:21:29 crc kubenswrapper[4739]: I0218 15:21:29.373270 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 15:21:29 crc 
kubenswrapper[4739]: I0218 15:21:29.374302 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:21:29 crc kubenswrapper[4739]: I0218 15:21:29.374368 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" gracePeriod=600 Feb 18 15:21:29 crc kubenswrapper[4739]: E0218 15:21:29.495096 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:21:30 crc kubenswrapper[4739]: I0218 15:21:30.013356 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" exitCode=0 Feb 18 15:21:30 crc kubenswrapper[4739]: I0218 15:21:30.013417 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"} Feb 18 15:21:30 crc kubenswrapper[4739]: I0218 15:21:30.013483 4739 scope.go:117] "RemoveContainer" 
containerID="3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c" Feb 18 15:21:30 crc kubenswrapper[4739]: I0218 15:21:30.014482 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:21:30 crc kubenswrapper[4739]: E0218 15:21:30.014885 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:21:41 crc kubenswrapper[4739]: I0218 15:21:41.411162 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:21:41 crc kubenswrapper[4739]: E0218 15:21:41.422181 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:21:45 crc kubenswrapper[4739]: I0218 15:21:45.100978 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_44288fd5-6ac4-4d9f-b16e-97ae45b79030/aodh-api/0.log" Feb 18 15:21:45 crc kubenswrapper[4739]: I0218 15:21:45.877151 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_44288fd5-6ac4-4d9f-b16e-97ae45b79030/aodh-listener/0.log" Feb 18 15:21:45 crc kubenswrapper[4739]: I0218 15:21:45.899046 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_44288fd5-6ac4-4d9f-b16e-97ae45b79030/aodh-notifier/0.log" Feb 18 15:21:45 crc kubenswrapper[4739]: I0218 15:21:45.957707 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_44288fd5-6ac4-4d9f-b16e-97ae45b79030/aodh-evaluator/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.103444 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fccfc9568-dvccq_aca969df-0549-4d07-ada4-2e0515419a1d/barbican-api/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.129357 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fccfc9568-dvccq_aca969df-0549-4d07-ada4-2e0515419a1d/barbican-api-log/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.216845 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-575dbd86bd-gjcs6_8f41089a-bbe1-4371-9a89-38423dca256c/barbican-keystone-listener/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.381116 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-575dbd86bd-gjcs6_8f41089a-bbe1-4371-9a89-38423dca256c/barbican-keystone-listener-log/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.406443 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-765d88ff9c-smd7n_53848a1c-a5c5-4948-a45f-2ba01bc166ca/barbican-worker/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.459985 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-765d88ff9c-smd7n_53848a1c-a5c5-4948-a45f-2ba01bc166ca/barbican-worker-log/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.609197 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f_64a6af44-5f38-4ac7-a370-74b190762136/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.690761 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b/ceilometer-central-agent/1.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.867560 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b/ceilometer-central-agent/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.900543 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b/proxy-httpd/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.904094 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b/ceilometer-notification-agent/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.968864 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b/sg-core/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.128213 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_54fd1c90-48dd-4ae7-b2db-d80aa5f14a24/cinder-api-log/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.150370 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_54fd1c90-48dd-4ae7-b2db-d80aa5f14a24/cinder-api/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.326796 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ff1a7d36-7f60-40b3-82ee-2fd64f780bc4/cinder-scheduler/2.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.400584 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_ff1a7d36-7f60-40b3-82ee-2fd64f780bc4/probe/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.403168 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ff1a7d36-7f60-40b3-82ee-2fd64f780bc4/cinder-scheduler/1.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.570622 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-74l2j_c3fe82f6-0603-44f2-95fa-57ce24505d2c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.666072 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9jq24_8795d84c-3a90-438c-8f2b-066cd875316d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.790065 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hd9ps_703ba4cc-fc0d-4adf-bb13-62fecb68cff7/init/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.962342 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hd9ps_703ba4cc-fc0d-4adf-bb13-62fecb68cff7/init/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.009417 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv_ed059e6b-2560-487a-98a8-c1443d31cca9/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.024045 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hd9ps_703ba4cc-fc0d-4adf-bb13-62fecb68cff7/dnsmasq-dns/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.226704 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_ac763f9f-5faa-4559-8d07-960b3d30566b/glance-httpd/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.448186 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ac763f9f-5faa-4559-8d07-960b3d30566b/glance-log/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.451254 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_43f517de-033c-467c-9937-df5706ee1ca2/glance-log/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.477088 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_43f517de-033c-467c-9937-df5706ee1ca2/glance-httpd/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.276061 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5cfc6d5787-cxgnr_9c65abc8-9ca5-4a28-89d7-f5ffe23d1040/heat-api/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.283837 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5957545cb-6lrc2_26539513-f274-471e-ad4a-10bcd4758458/heat-engine/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.365701 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-8dd984b75-2cjs7_ecd1f6fa-009d-4942-98ad-203c31a7bf5b/heat-cfnapi/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.419747 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-klrh7_fc5c5a16-015a-48fe-a2c1-1954543e14bd/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.570765 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vglv4_af925314-bcd8-4373-b57e-612251a9687a/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.779692 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29523781-z64zk_28825764-dace-4769-b71e-4d55b8aa1d97/keystone-cron/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.879764 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3e688eb1-895d-465e-b5d9-a7b7ba9f4650/kube-state-metrics/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.210742 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-znm2n_bd7dea6a-d047-4a6c-809f-395a7cf418e8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.220022 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-nsjkf_61bf8a46-92c1-4b2e-9b8c-8206c618b98a/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.304297 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7dff988c46-72t9g_74cf9632-a7c0-4b6e-98ce-ebd6411a6594/keystone-api/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.486711 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_8143c3df-5224-4095-a65f-f9f005913b61/mysqld-exporter/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.816891 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77cbbcb957-6xzzv_6225bd93-c14b-4682-8e07-e6ca3cce37c9/neutron-httpd/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.887984 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j_015603d5-7d09-4388-a5d1-93c25d1b6344/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.902610 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77cbbcb957-6xzzv_6225bd93-c14b-4682-8e07-e6ca3cce37c9/neutron-api/0.log" Feb 18 15:21:51 crc kubenswrapper[4739]: I0218 15:21:51.639743 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_c35bd35d-d228-4223-a207-ea164d0c6b23/nova-cell0-conductor-conductor/0.log" Feb 18 15:21:51 crc kubenswrapper[4739]: I0218 15:21:51.671678 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3797374a-f0e4-4ba5-8974-c0049bad543a/nova-api-log/0.log" Feb 18 15:21:51 crc kubenswrapper[4739]: I0218 15:21:51.891303 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0/nova-cell1-conductor-conductor/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.013670 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ea00e513-02cf-4951-b9ec-50966f982142/nova-cell1-novncproxy-novncproxy/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.049521 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3797374a-f0e4-4ba5-8974-c0049bad543a/nova-api-api/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.185832 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-mwcgw_08b26802-db14-4190-99d1-9c9c7403195b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.377501 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_2ab30c1a-7b94-430a-ac85-ebe051fadbfe/nova-metadata-log/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.750269 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_869aa11b-eba7-4598-90dc-d50c642b9120/mysql-bootstrap/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.825009 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ba769c63-86fa-4971-afd8-4e3a57c94c37/nova-scheduler-scheduler/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.988400 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_869aa11b-eba7-4598-90dc-d50c642b9120/galera/1.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.013797 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_869aa11b-eba7-4598-90dc-d50c642b9120/mysql-bootstrap/0.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.070313 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_869aa11b-eba7-4598-90dc-d50c642b9120/galera/0.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.274505 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acc9bbc5-8705-410b-977b-ca9245834e36/mysql-bootstrap/0.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.537271 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acc9bbc5-8705-410b-977b-ca9245834e36/galera/0.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.573802 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acc9bbc5-8705-410b-977b-ca9245834e36/mysql-bootstrap/0.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.610229 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_acc9bbc5-8705-410b-977b-ca9245834e36/galera/1.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.147663 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2ab30c1a-7b94-430a-ac85-ebe051fadbfe/nova-metadata-metadata/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.412868 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:21:54 crc kubenswrapper[4739]: E0218 15:21:54.414511 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.531081 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6699e575-f077-433c-a257-f65f329d6e69/openstackclient/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.548384 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-q6g47_8daa97ee-3449-4043-8218-71aaa601c37c/openstack-network-exporter/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.703702 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_39286c8b-55e8-41a2-9f36-a7ce475e8313/memcached/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.720556 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cglq_3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7/ovsdb-server-init/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.870775 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-5cglq_3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7/ovsdb-server-init/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.899598 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cglq_3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7/ovsdb-server/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.911610 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zz64p_7289493d-f197-436b-bc45-84721d12c034/ovn-controller/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.920182 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cglq_3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7/ovs-vswitchd/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.155166 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-g8rqb_c4382bff-5480-4a55-ad49-e6293729f738/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.229028 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3be45be-9ee4-4114-b2e5-78d9b0341129/ovn-northd/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.235399 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3be45be-9ee4-4114-b2e5-78d9b0341129/openstack-network-exporter/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.383733 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_22289461-6c53-461c-adfe-0f1cd7209928/openstack-network-exporter/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.393030 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_22289461-6c53-461c-adfe-0f1cd7209928/ovsdbserver-nb/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.639641 4739 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_74c434ad-eea8-4896-b65d-26eb1ca89f84/openstack-network-exporter/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.786329 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_74c434ad-eea8-4896-b65d-26eb1ca89f84/ovsdbserver-sb/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.931226 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65fbfb5b48-rchlc_38710bdf-e679-45f4-b3a6-597a3b1cb186/placement-api/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.956388 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65fbfb5b48-rchlc_38710bdf-e679-45f4-b3a6-597a3b1cb186/placement-log/0.log" Feb 18 15:21:56 crc kubenswrapper[4739]: I0218 15:21:56.672157 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c16940-f153-4d15-891d-b0b91e9bce5a/init-config-reloader/0.log" Feb 18 15:21:56 crc kubenswrapper[4739]: I0218 15:21:56.838967 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c16940-f153-4d15-891d-b0b91e9bce5a/init-config-reloader/0.log" Feb 18 15:21:56 crc kubenswrapper[4739]: I0218 15:21:56.868359 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c16940-f153-4d15-891d-b0b91e9bce5a/thanos-sidecar/0.log" Feb 18 15:21:56 crc kubenswrapper[4739]: I0218 15:21:56.868859 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c16940-f153-4d15-891d-b0b91e9bce5a/config-reloader/0.log" Feb 18 15:21:56 crc kubenswrapper[4739]: I0218 15:21:56.872538 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c16940-f153-4d15-891d-b0b91e9bce5a/prometheus/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.012983 4739 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c71b6fb5-d59d-479d-b3fc-996d14bd93ed/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.211061 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd925294-7441-4ba8-af23-290ef19deb9b/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.229599 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c71b6fb5-d59d-479d-b3fc-996d14bd93ed/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.230724 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c71b6fb5-d59d-479d-b3fc-996d14bd93ed/rabbitmq/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.446908 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd925294-7441-4ba8-af23-290ef19deb9b/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.496792 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd925294-7441-4ba8-af23-290ef19deb9b/rabbitmq/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.505726 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_de0100ca-60e4-40d3-afeb-f5da9513fdc1/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.666745 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_de0100ca-60e4-40d3-afeb-f5da9513fdc1/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.689680 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_de0100ca-60e4-40d3-afeb-f5da9513fdc1/rabbitmq/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.782250 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_83da58fc-6d28-4a56-abc1-00267082c6b6/setup-container/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.047606 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_83da58fc-6d28-4a56-abc1-00267082c6b6/setup-container/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.064592 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_83da58fc-6d28-4a56-abc1-00267082c6b6/rabbitmq/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.143515 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr_c7a96416-0a9e-44f5-9200-755a99d4c38e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.263187 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8lfnc_ba2cd97a-cec6-45bc-a08c-b179dc0f72d6/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.339176 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5_888c24c8-ed9b-4434-b55c-d9f89ba3f0eb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.457407 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-jct96_18f01021-e95a-43e8-a660-1a2c9cb9d8c5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.573584 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-f68sz_63f139bc-490d-48b7-98c1-e29c8f583d90/ssh-known-hosts-edpm-deployment/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.753511 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-668fffc447-mjpk7_ac478be7-1c16-4a7f-a2d2-618cfe76c3d3/proxy-server/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.765361 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-668fffc447-mjpk7_ac478be7-1c16-4a7f-a2d2-618cfe76c3d3/proxy-httpd/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.854331 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-cfjpx_ab89b7a2-642d-4a99-9eb4-f01b2990e75d/swift-ring-rebalance/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.993808 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/account-auditor/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.996321 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/account-reaper/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.081434 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/account-replicator/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.088872 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/container-auditor/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.095325 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/account-server/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.222534 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/container-server/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.237889 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/container-replicator/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.287255 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/object-auditor/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.304742 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/container-updater/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.327012 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/object-expirer/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.424836 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/object-replicator/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.430401 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/object-server/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.472252 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/object-updater/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.513919 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/rsync/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.534611 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/swift-recon-cron/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.662060 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x_aa0510e7-f2a3-4466-b797-dab2e7ec0218/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.747305 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8_76808ec1-db9d-494f-9d72-88b2bc28befb/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.967513 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_fafc1147-dd3a-429c-ae6f-48865401c68b/test-operator-logs-container/0.log" Feb 18 15:22:00 crc kubenswrapper[4739]: I0218 15:22:00.076172 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh_884f40e4-492b-4f73-94a7-8be81bde150e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:22:00 crc kubenswrapper[4739]: I0218 15:22:00.129845 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2d70fa76-2eec-4ca5-abd7-44a082625a40/tempest-tests-tempest-tests-runner/0.log" Feb 18 15:22:08 crc kubenswrapper[4739]: I0218 15:22:08.420222 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:22:08 crc kubenswrapper[4739]: E0218 15:22:08.421248 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:22:22 crc kubenswrapper[4739]: I0218 
15:22:22.413777 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:22:22 crc kubenswrapper[4739]: E0218 15:22:22.416805 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.326681 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/util/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.514491 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/pull/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.539187 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/util/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.578136 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/pull/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.767609 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/extract/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.793237 4739 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/util/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.812793 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/pull/0.log" Feb 18 15:22:31 crc kubenswrapper[4739]: I0218 15:22:31.372097 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-47445_c8f419fe-23b1-4a93-97fe-05071df32425/manager/0.log" Feb 18 15:22:31 crc kubenswrapper[4739]: I0218 15:22:31.740557 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-hxdbh_19470a60-c796-4a28-a0e2-65b50fa94ea6/manager/0.log" Feb 18 15:22:31 crc kubenswrapper[4739]: I0218 15:22:31.996423 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-m469j_60bad312-a989-43d1-87e6-6c6f10d1ae8f/manager/0.log" Feb 18 15:22:32 crc kubenswrapper[4739]: I0218 15:22:32.235436 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-xhkdh_877f7fe3-168f-4b05-a88e-a7a11bf45e36/manager/0.log" Feb 18 15:22:32 crc kubenswrapper[4739]: I0218 15:22:32.825786 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-hrxn2_fb608395-17b5-4b92-a0be-b5abc08ac979/manager/1.log" Feb 18 15:22:33 crc kubenswrapper[4739]: I0218 15:22:33.051771 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-hrxn2_fb608395-17b5-4b92-a0be-b5abc08ac979/manager/0.log" Feb 18 15:22:33 crc 
kubenswrapper[4739]: I0218 15:22:33.101306 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-54k4b_b1d0315e-6ccb-4c6a-a488-98454bb41358/manager/0.log"
Feb 18 15:22:33 crc kubenswrapper[4739]: I0218 15:22:33.411722 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:22:33 crc kubenswrapper[4739]: E0218 15:22:33.412168 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:22:33 crc kubenswrapper[4739]: I0218 15:22:33.519495 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-q4vb2_2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682/manager/0.log"
Feb 18 15:22:33 crc kubenswrapper[4739]: I0218 15:22:33.795237 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-prt26_209f2e6c-29e9-444b-b14a-10eadb782a59/manager/0.log"
Feb 18 15:22:34 crc kubenswrapper[4739]: I0218 15:22:34.083529 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-b9hds_d617f67f-2577-418f-a367-42c366c17980/manager/0.log"
Feb 18 15:22:34 crc kubenswrapper[4739]: I0218 15:22:34.422807 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-8vh65_92f1b9c3-1bdd-48ca-9a76-68ace2635cf1/manager/0.log"
Feb 18 15:22:34 crc kubenswrapper[4739]: I0218 15:22:34.494940 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-cdt9l_3b114d0a-837c-4f0c-b02a-db694bdab362/manager/0.log"
Feb 18 15:22:34 crc kubenswrapper[4739]: I0218 15:22:34.832660 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-rk7x9_40be8fff-51f0-467a-aca5-517e02eea23b/manager/0.log"
Feb 18 15:22:35 crc kubenswrapper[4739]: I0218 15:22:35.144516 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl_52927612-b074-4573-aa63-41cbb1d704bf/manager/1.log"
Feb 18 15:22:35 crc kubenswrapper[4739]: I0218 15:22:35.366208 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl_52927612-b074-4573-aa63-41cbb1d704bf/manager/0.log"
Feb 18 15:22:35 crc kubenswrapper[4739]: I0218 15:22:35.837674 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5864f6ff6b-7n5hc_8bf4ed0a-8055-462b-9324-1fa1c4f429b1/operator/0.log"
Feb 18 15:22:36 crc kubenswrapper[4739]: I0218 15:22:36.239990 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cnhvq_07815587-810f-4c17-a671-8c613b3755d6/registry-server/1.log"
Feb 18 15:22:36 crc kubenswrapper[4739]: I0218 15:22:36.375276 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-4f4zc_d34f7233-92b8-4803-ab81-0da45a4de925/manager/1.log"
Feb 18 15:22:36 crc kubenswrapper[4739]: I0218 15:22:36.455792 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cnhvq_07815587-810f-4c17-a671-8c613b3755d6/registry-server/0.log"
Feb 18 15:22:36 crc kubenswrapper[4739]: I0218 15:22:36.806500 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-4lkbs_8336a5f7-2ff0-440a-88b0-a6ab51692965/manager/0.log"
Feb 18 15:22:37 crc kubenswrapper[4739]: I0218 15:22:37.361977 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-lmvdv_e19083b1-791a-4549-b64e-0bb0032abad2/manager/0.log"
Feb 18 15:22:37 crc kubenswrapper[4739]: I0218 15:22:37.593148 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7gszz_06163b75-4f40-42a0-83d8-70c935b9172c/operator/0.log"
Feb 18 15:22:37 crc kubenswrapper[4739]: I0218 15:22:37.881120 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-s7fsm_ac911184-3930-4f7e-9d77-2cc9e7262ea6/manager/0.log"
Feb 18 15:22:38 crc kubenswrapper[4739]: I0218 15:22:38.464296 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-4f4zc_d34f7233-92b8-4803-ab81-0da45a4de925/manager/0.log"
Feb 18 15:22:38 crc kubenswrapper[4739]: I0218 15:22:38.544733 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-jblfh_6741b4b4-1817-4639-bdf6-b5be2729a1fa/manager/1.log"
Feb 18 15:22:38 crc kubenswrapper[4739]: I0218 15:22:38.693248 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7954588dd9-trg52_8add2ed9-6416-4e9f-a3a1-f8a615962850/manager/0.log"
Feb 18 15:22:38 crc kubenswrapper[4739]: I0218 15:22:38.721254 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6956d67c5c-52bt7_538f0d59-9eea-4f76-a310-f7f724593a1e/manager/0.log"
Feb 18 15:22:38 crc kubenswrapper[4739]: I0218 15:22:38.721972 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-jblfh_6741b4b4-1817-4639-bdf6-b5be2729a1fa/manager/0.log"
Feb 18 15:22:39 crc kubenswrapper[4739]: I0218 15:22:39.470516 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-kssdd_caed7b7d-66db-4bd9-ba33-efc5f3951069/manager/0.log"
Feb 18 15:22:45 crc kubenswrapper[4739]: I0218 15:22:45.487991 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-knpz9_61bc4b17-baf6-435c-9280-b97fcede913c/manager/0.log"
Feb 18 15:22:47 crc kubenswrapper[4739]: I0218 15:22:47.411890 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:22:47 crc kubenswrapper[4739]: E0218 15:22:47.412899 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:23:02 crc kubenswrapper[4739]: I0218 15:23:02.411089 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:23:02 crc kubenswrapper[4739]: E0218 15:23:02.411895 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:23:03 crc kubenswrapper[4739]: I0218 15:23:03.545786 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pgswj_ffd4b935-0435-4a73-a7cd-596856c63f84/control-plane-machine-set-operator/0.log"
Feb 18 15:23:03 crc kubenswrapper[4739]: I0218 15:23:03.882271 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sqm9s_d41d7405-9b25-414a-a247-1d945df68f89/kube-rbac-proxy/0.log"
Feb 18 15:23:03 crc kubenswrapper[4739]: I0218 15:23:03.941076 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sqm9s_d41d7405-9b25-414a-a247-1d945df68f89/machine-api-operator/0.log"
Feb 18 15:23:13 crc kubenswrapper[4739]: I0218 15:23:13.411088 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:23:13 crc kubenswrapper[4739]: E0218 15:23:13.411896 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:23:18 crc kubenswrapper[4739]: I0218 15:23:18.623214 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-bfgbz_4a1588a0-096b-4e77-b251-f034a57c7a04/cert-manager-controller/0.log"
Feb 18 15:23:18 crc kubenswrapper[4739]: I0218 15:23:18.846277 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xl5rj_09228bff-e02a-4a38-86ab-3d18492c3fa1/cert-manager-cainjector/0.log"
Feb 18 15:23:18 crc kubenswrapper[4739]: I0218 15:23:18.953694 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-927qr_c9731232-5945-414d-bf7c-cd9207130675/cert-manager-webhook/0.log"
Feb 18 15:23:27 crc kubenswrapper[4739]: I0218 15:23:27.419193 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:23:27 crc kubenswrapper[4739]: E0218 15:23:27.420806 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.177833 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-c8h9g_292e9bf2-9674-423f-9ba5-4e83ff259a06/nmstate-console-plugin/0.log"
Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.396900 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4l8z8_3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b/kube-rbac-proxy/0.log"
Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.414247 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xwm5v_547a8c99-05a3-45bf-9e45-785d6cdb8fb5/nmstate-handler/0.log"
Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.585899 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4l8z8_3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b/nmstate-metrics/0.log"
Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.598428 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-77rqb_2f5c1234-49df-4f31-842f-cdaf04adff3c/nmstate-operator/0.log"
Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.753312 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-wtz97_ff0bf868-48fc-48a7-845d-3286c1dd16f0/nmstate-webhook/0.log"
Feb 18 15:23:40 crc kubenswrapper[4739]: I0218 15:23:40.410936 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:23:40 crc kubenswrapper[4739]: E0218 15:23:40.411648 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:23:46 crc kubenswrapper[4739]: I0218 15:23:46.814786 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/kube-rbac-proxy/0.log"
Feb 18 15:23:46 crc kubenswrapper[4739]: I0218 15:23:46.852687 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/manager/1.log"
Feb 18 15:23:47 crc kubenswrapper[4739]: I0218 15:23:47.485788 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/manager/0.log"
Feb 18 15:23:53 crc kubenswrapper[4739]: I0218 15:23:53.410667 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:23:53 crc kubenswrapper[4739]: E0218 15:23:53.411391 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:23:59 crc kubenswrapper[4739]: I0218 15:23:59.700928 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-c9tcc_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc/prometheus-operator/0.log"
Feb 18 15:23:59 crc kubenswrapper[4739]: I0218 15:23:59.807182 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_3d337f75-bb26-461d-9519-f17c333cfc55/prometheus-operator-admission-webhook/0.log"
Feb 18 15:23:59 crc kubenswrapper[4739]: I0218 15:23:59.950838 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_e257eada-747c-4c16-ade0-64120ce08e5b/prometheus-operator-admission-webhook/0.log"
Feb 18 15:23:59 crc kubenswrapper[4739]: I0218 15:23:59.990537 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mqkqw_0348c042-11c0-4a27-a8d4-04beea8e11a3/operator/1.log"
Feb 18 15:24:00 crc kubenswrapper[4739]: I0218 15:24:00.122066 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mqkqw_0348c042-11c0-4a27-a8d4-04beea8e11a3/operator/0.log"
Feb 18 15:24:00 crc kubenswrapper[4739]: I0218 15:24:00.157927 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m5hn7_7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b/observability-ui-dashboards/0.log"
Feb 18 15:24:00 crc kubenswrapper[4739]: I0218 15:24:00.298856 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-lpf5k_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe/perses-operator/0.log"
Feb 18 15:24:05 crc kubenswrapper[4739]: I0218 15:24:05.411578 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:24:05 crc kubenswrapper[4739]: E0218 15:24:05.412296 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:24:14 crc kubenswrapper[4739]: I0218 15:24:14.752436 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-54nln_4b0da132-982d-47b8-ae8a-d0529fbfe6a4/cluster-logging-operator/0.log"
Feb 18 15:24:14 crc kubenswrapper[4739]: I0218 15:24:14.961687 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-ptdrt_3d3df5da-d291-44d1-890f-4f094d9e8301/collector/0.log"
Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.020769 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_8cadd086-3e21-4dfc-9577-356fdcfe83c1/loki-compactor/0.log"
Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.144600 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-68g9x_d2537052-1467-4892-afe4-cafbbdfbd645/loki-distributor/0.log"
Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.275751 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f9bf547f9-nd7jd_717b73b9-8190-41ce-8513-eb314a37cdfd/gateway/0.log"
Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.292669 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f9bf547f9-nd7jd_717b73b9-8190-41ce-8513-eb314a37cdfd/opa/0.log"
Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.435938 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f9bf547f9-whgjq_82d2d64c-4971-48ee-a75c-30adadf054de/gateway/0.log"
Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.470677 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f9bf547f9-whgjq_82d2d64c-4971-48ee-a75c-30adadf054de/opa/0.log"
Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.591417 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_d13e1961-45de-4db2-a4cb-04c91c7b18ad/loki-index-gateway/0.log"
Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.756310 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_bfabc0be-78aa-4cf2-ae16-6d226b95be03/loki-ingester/0.log"
Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.788037 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-ccsmg_3886312a-0449-43cc-b914-a4633b2c7e80/loki-querier/0.log"
Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.937214 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-grbnx_f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b/loki-query-frontend/0.log"
Feb 18 15:24:17 crc kubenswrapper[4739]: I0218 15:24:17.410343 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:24:17 crc kubenswrapper[4739]: E0218 15:24:17.410986 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:24:29 crc kubenswrapper[4739]: I0218 15:24:29.410989 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:24:29 crc kubenswrapper[4739]: E0218 15:24:29.411876 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:24:31 crc kubenswrapper[4739]: I0218 15:24:31.935836 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-tr2nx_7bcf09d7-a0a6-4225-a222-1c05f51e5f7d/controller/1.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.104340 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-tr2nx_7bcf09d7-a0a6-4225-a222-1c05f51e5f7d/controller/0.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.201473 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-tr2nx_7bcf09d7-a0a6-4225-a222-1c05f51e5f7d/kube-rbac-proxy/0.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.241310 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-frr-files/0.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.444163 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-metrics/0.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.475207 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-frr-files/0.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.498680 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-reloader/0.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.546647 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-reloader/0.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.733939 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-frr-files/0.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.774870 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-reloader/0.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.825075 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-metrics/0.log"
Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.878845 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-metrics/0.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.041176 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-metrics/0.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.045253 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-reloader/0.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.060099 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-frr-files/0.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.116398 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/controller/0.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.268398 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/frr/1.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.277260 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/frr-metrics/0.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.347877 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/kube-rbac-proxy/0.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.532536 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/kube-rbac-proxy-frr/0.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.590692 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/reloader/0.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.750792 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-q8h4v_bf495248-0dde-4619-bce7-2cbbda1fd646/frr-k8s-webhook-server/0.log"
Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.878489 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b78699c88-r8kr2_d5023d08-507d-422f-b218-72057e18ef93/manager/1.log"
Feb 18 15:24:34 crc kubenswrapper[4739]: I0218 15:24:34.032299 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b78699c88-r8kr2_d5023d08-507d-422f-b218-72057e18ef93/manager/0.log"
Feb 18 15:24:34 crc kubenswrapper[4739]: I0218 15:24:34.103712 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86f6cb9d5d-8jd6g_0183ebc4-768c-4e08-8f1c-059fff8ba4e3/webhook-server/1.log"
Feb 18 15:24:34 crc kubenswrapper[4739]: I0218 15:24:34.311487 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86f6cb9d5d-8jd6g_0183ebc4-768c-4e08-8f1c-059fff8ba4e3/webhook-server/0.log"
Feb 18 15:24:34 crc kubenswrapper[4739]: I0218 15:24:34.493725 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8gqkq_65fdc711-6806-433f-9f62-a09e816c6acf/kube-rbac-proxy/0.log"
Feb 18 15:24:34 crc kubenswrapper[4739]: I0218 15:24:34.666228 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/frr/0.log"
Feb 18 15:24:35 crc kubenswrapper[4739]: I0218 15:24:35.047282 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8gqkq_65fdc711-6806-433f-9f62-a09e816c6acf/speaker/1.log"
Feb 18 15:24:35 crc kubenswrapper[4739]: I0218 15:24:35.247465 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8gqkq_65fdc711-6806-433f-9f62-a09e816c6acf/speaker/0.log"
Feb 18 15:24:40 crc kubenswrapper[4739]: I0218 15:24:40.411093 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:24:40 crc kubenswrapper[4739]: E0218 15:24:40.413121 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:24:49 crc kubenswrapper[4739]: I0218 15:24:49.543157 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/util/0.log"
Feb 18 15:24:49 crc kubenswrapper[4739]: I0218 15:24:49.712801 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/util/0.log"
Feb 18 15:24:49 crc kubenswrapper[4739]: I0218 15:24:49.769868 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/pull/0.log"
Feb 18 15:24:49 crc kubenswrapper[4739]: I0218 15:24:49.804022 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/pull/0.log"
Feb 18 15:24:49 crc kubenswrapper[4739]: I0218 15:24:49.989690 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/util/0.log"
Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.033954 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/pull/0.log"
Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.040746 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/extract/0.log"
Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.172547 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/util/0.log"
Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.333162 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/util/0.log"
Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.394832 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/pull/0.log"
Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.401897 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/pull/0.log"
Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.175754 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/pull/0.log"
Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.183848 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/extract/0.log"
Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.207756 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/util/0.log"
Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.406809 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/util/0.log"
Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.868672 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/pull/0.log"
Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.870791 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/util/0.log"
Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.874250 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/pull/0.log"
Feb 18 15:24:52 crc kubenswrapper[4739]: I0218 15:24:52.090762 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/util/0.log"
Feb 18 15:24:52 crc kubenswrapper[4739]: I0218 15:24:52.114293 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/extract/0.log"
Feb 18 15:24:52 crc kubenswrapper[4739]: I0218 15:24:52.128222 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/pull/0.log"
Feb 18 15:24:52 crc kubenswrapper[4739]: I0218 15:24:52.291657 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-utilities/0.log"
Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.082070 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-content/0.log"
Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.087577 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-content/0.log"
Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.108364 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-utilities/0.log"
Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.299957 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-utilities/0.log"
Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.306099 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-content/0.log"
Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.410823 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:24:53 crc kubenswrapper[4739]: E0218 15:24:53.411144 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.544648 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-utilities/0.log"
Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.736664 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-utilities/0.log"
Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.783795 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-content/0.log"
Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.849201 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-content/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.039648 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-utilities/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.086745 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-content/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.276705 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/util/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.456099 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/registry-server/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.559389 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/util/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.604496 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/pull/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.618831 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/registry-server/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.627089 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/pull/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.804566 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/pull/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.812953 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/util/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.834566 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/extract/0.log"
Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.866068 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/util/0.log"
Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.091020 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/pull/0.log"
Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.138655 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/pull/0.log"
Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.150645 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/util/0.log"
Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.341476 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/util/0.log"
Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.351742 4739 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/extract/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.391354 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-28vcn_0dc6acff-649a-4e95-ba42-ad79dae4a787/marketplace-operator/1.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.407905 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/pull/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.541510 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-28vcn_0dc6acff-649a-4e95-ba42-ad79dae4a787/marketplace-operator/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.600531 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-utilities/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.780657 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-utilities/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.798259 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-content/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.798361 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-content/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.978146 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-utilities/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.276912 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-content/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.335333 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-utilities/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.467185 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/registry-server/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.531496 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-utilities/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.558927 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-content/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.573387 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-content/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.736077 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-content/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.743531 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-utilities/0.log" Feb 
18 15:24:57 crc kubenswrapper[4739]: I0218 15:24:57.199394 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/registry-server/0.log" Feb 18 15:25:06 crc kubenswrapper[4739]: I0218 15:25:06.411041 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:25:06 crc kubenswrapper[4739]: E0218 15:25:06.413428 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.329054 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_3d337f75-bb26-461d-9519-f17c333cfc55/prometheus-operator-admission-webhook/0.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.340228 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_e257eada-747c-4c16-ade0-64120ce08e5b/prometheus-operator-admission-webhook/0.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.364930 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-c9tcc_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc/prometheus-operator/0.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.582504 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mqkqw_0348c042-11c0-4a27-a8d4-04beea8e11a3/operator/1.log" Feb 18 15:25:11 crc 
kubenswrapper[4739]: I0218 15:25:11.617736 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m5hn7_7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b/observability-ui-dashboards/0.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.641965 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mqkqw_0348c042-11c0-4a27-a8d4-04beea8e11a3/operator/0.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.680990 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-lpf5k_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe/perses-operator/0.log" Feb 18 15:25:18 crc kubenswrapper[4739]: I0218 15:25:18.420917 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:25:18 crc kubenswrapper[4739]: E0218 15:25:18.421751 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:25:27 crc kubenswrapper[4739]: I0218 15:25:27.269130 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/kube-rbac-proxy/0.log" Feb 18 15:25:27 crc kubenswrapper[4739]: I0218 15:25:27.311962 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/manager/1.log" Feb 18 15:25:27 crc kubenswrapper[4739]: I0218 15:25:27.365543 4739 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/manager/0.log" Feb 18 15:25:31 crc kubenswrapper[4739]: I0218 15:25:31.410436 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:25:31 crc kubenswrapper[4739]: E0218 15:25:31.411314 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:25:43 crc kubenswrapper[4739]: I0218 15:25:43.411380 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:25:43 crc kubenswrapper[4739]: E0218 15:25:43.412026 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:25:56 crc kubenswrapper[4739]: I0218 15:25:56.411042 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:25:56 crc kubenswrapper[4739]: E0218 15:25:56.411879 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:26:07 crc kubenswrapper[4739]: I0218 15:26:07.410723 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:26:07 crc kubenswrapper[4739]: E0218 15:26:07.411552 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:26:22 crc kubenswrapper[4739]: I0218 15:26:22.410814 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:26:22 crc kubenswrapper[4739]: E0218 15:26:22.411881 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:26:33 crc kubenswrapper[4739]: I0218 15:26:33.411881 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:26:34 crc kubenswrapper[4739]: I0218 15:26:34.563714 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" 
event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d"} Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.021026 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xpngx"] Feb 18 15:27:24 crc kubenswrapper[4739]: E0218 15:27:24.022139 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" containerName="container-00" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.022154 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" containerName="container-00" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.022432 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" containerName="container-00" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.026668 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.048531 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpngx"] Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.165543 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.165695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.165772 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ztn\" (UniqueName: \"kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.267644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.267724 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.267775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2ztn\" (UniqueName: \"kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.268696 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.268790 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.295187 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2ztn\" (UniqueName: \"kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.348686 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:25 crc kubenswrapper[4739]: I0218 15:27:25.455726 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpngx"] Feb 18 15:27:26 crc kubenswrapper[4739]: I0218 15:27:26.281488 4739 generic.go:334] "Generic (PLEG): container finished" podID="1e938c96-4652-419c-97ce-90bb1d83768a" containerID="28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58" exitCode=0 Feb 18 15:27:26 crc kubenswrapper[4739]: I0218 15:27:26.281583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerDied","Data":"28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58"} Feb 18 15:27:26 crc kubenswrapper[4739]: I0218 15:27:26.281794 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerStarted","Data":"be7eb90f9f21d375f11e8bed13d64d9c69e389afd98358691bb486d4b1e02662"} Feb 18 15:27:26 crc kubenswrapper[4739]: I0218 15:27:26.285706 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:27:27 crc kubenswrapper[4739]: I0218 15:27:27.295082 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerStarted","Data":"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677"} Feb 18 15:27:29 crc kubenswrapper[4739]: I0218 15:27:29.325980 4739 generic.go:334] "Generic (PLEG): container finished" podID="1e938c96-4652-419c-97ce-90bb1d83768a" containerID="570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677" exitCode=0 Feb 18 15:27:29 crc kubenswrapper[4739]: I0218 15:27:29.326737 4739 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerDied","Data":"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677"} Feb 18 15:27:30 crc kubenswrapper[4739]: I0218 15:27:30.340049 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerStarted","Data":"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2"} Feb 18 15:27:30 crc kubenswrapper[4739]: I0218 15:27:30.367124 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xpngx" podStartSLOduration=3.849306431 podStartE2EDuration="7.367102638s" podCreationTimestamp="2026-02-18 15:27:23 +0000 UTC" firstStartedPulling="2026-02-18 15:27:26.284129413 +0000 UTC m=+5278.779850335" lastFinishedPulling="2026-02-18 15:27:29.80192562 +0000 UTC m=+5282.297646542" observedRunningTime="2026-02-18 15:27:30.35839706 +0000 UTC m=+5282.854118002" watchObservedRunningTime="2026-02-18 15:27:30.367102638 +0000 UTC m=+5282.862823570" Feb 18 15:27:34 crc kubenswrapper[4739]: I0218 15:27:34.349327 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:34 crc kubenswrapper[4739]: I0218 15:27:34.350172 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:34 crc kubenswrapper[4739]: I0218 15:27:34.401970 4739 generic.go:334] "Generic (PLEG): container finished" podID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerID="b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a" exitCode=0 Feb 18 15:27:34 crc kubenswrapper[4739]: I0218 15:27:34.402026 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/must-gather-vps8f" 
event={"ID":"205cb55b-f489-4c55-aa9e-13f9ff38def6","Type":"ContainerDied","Data":"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a"} Feb 18 15:27:34 crc kubenswrapper[4739]: I0218 15:27:34.403099 4739 scope.go:117] "RemoveContainer" containerID="b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a" Feb 18 15:27:35 crc kubenswrapper[4739]: I0218 15:27:35.341687 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-26llf_must-gather-vps8f_205cb55b-f489-4c55-aa9e-13f9ff38def6/gather/0.log" Feb 18 15:27:35 crc kubenswrapper[4739]: I0218 15:27:35.409187 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xpngx" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="registry-server" probeResult="failure" output=< Feb 18 15:27:35 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:27:35 crc kubenswrapper[4739]: > Feb 18 15:27:39 crc kubenswrapper[4739]: I0218 15:27:39.397315 4739 scope.go:117] "RemoveContainer" containerID="7916fd68986056bd3242a9e47080df5316e2eaa9c4630168c7e653cc8da14d93" Feb 18 15:27:44 crc kubenswrapper[4739]: I0218 15:27:44.423429 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:44 crc kubenswrapper[4739]: I0218 15:27:44.479898 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:44 crc kubenswrapper[4739]: I0218 15:27:44.667387 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xpngx"] Feb 18 15:27:45 crc kubenswrapper[4739]: I0218 15:27:45.530119 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xpngx" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="registry-server" 
containerID="cri-o://64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2" gracePeriod=2 Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.096839 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.232402 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities\") pod \"1e938c96-4652-419c-97ce-90bb1d83768a\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.232497 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content\") pod \"1e938c96-4652-419c-97ce-90bb1d83768a\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.232828 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2ztn\" (UniqueName: \"kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn\") pod \"1e938c96-4652-419c-97ce-90bb1d83768a\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.235028 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities" (OuterVolumeSpecName: "utilities") pod "1e938c96-4652-419c-97ce-90bb1d83768a" (UID: "1e938c96-4652-419c-97ce-90bb1d83768a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.241613 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn" (OuterVolumeSpecName: "kube-api-access-q2ztn") pod "1e938c96-4652-419c-97ce-90bb1d83768a" (UID: "1e938c96-4652-419c-97ce-90bb1d83768a"). InnerVolumeSpecName "kube-api-access-q2ztn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.298714 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e938c96-4652-419c-97ce-90bb1d83768a" (UID: "1e938c96-4652-419c-97ce-90bb1d83768a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.338364 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2ztn\" (UniqueName: \"kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn\") on node \"crc\" DevicePath \"\"" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.338400 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.338410 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.541742 4739 generic.go:334] "Generic (PLEG): container finished" podID="1e938c96-4652-419c-97ce-90bb1d83768a" 
containerID="64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2" exitCode=0
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.541788 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerDied","Data":"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2"}
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.541819 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerDied","Data":"be7eb90f9f21d375f11e8bed13d64d9c69e389afd98358691bb486d4b1e02662"}
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.541837 4739 scope.go:117] "RemoveContainer" containerID="64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2"
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.541846 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpngx"
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.570534 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xpngx"]
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.572133 4739 scope.go:117] "RemoveContainer" containerID="570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677"
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.583986 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xpngx"]
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.596076 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-26llf/must-gather-vps8f"]
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.596375 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-26llf/must-gather-vps8f" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="copy" containerID="cri-o://18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276" gracePeriod=2
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.598244 4739 scope.go:117] "RemoveContainer" containerID="28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58"
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.607832 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-26llf/must-gather-vps8f"]
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.814419 4739 scope.go:117] "RemoveContainer" containerID="64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2"
Feb 18 15:27:46 crc kubenswrapper[4739]: E0218 15:27:46.815359 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2\": container with ID starting with 64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2 not found: ID does not exist" containerID="64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2"
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.815434 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2"} err="failed to get container status \"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2\": rpc error: code = NotFound desc = could not find container \"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2\": container with ID starting with 64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2 not found: ID does not exist"
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.815519 4739 scope.go:117] "RemoveContainer" containerID="570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677"
Feb 18 15:27:46 crc kubenswrapper[4739]: E0218 15:27:46.816547 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677\": container with ID starting with 570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677 not found: ID does not exist" containerID="570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677"
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.816590 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677"} err="failed to get container status \"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677\": rpc error: code = NotFound desc = could not find container \"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677\": container with ID starting with 570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677 not found: ID does not exist"
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.816633 4739 scope.go:117] "RemoveContainer" containerID="28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58"
Feb 18 15:27:46 crc kubenswrapper[4739]: E0218 15:27:46.819851 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58\": container with ID starting with 28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58 not found: ID does not exist" containerID="28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58"
Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.819902 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58"} err="failed to get container status \"28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58\": rpc error: code = NotFound desc = could not find container \"28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58\": container with ID starting with 28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58 not found: ID does not exist"
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.228588 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-26llf_must-gather-vps8f_205cb55b-f489-4c55-aa9e-13f9ff38def6/copy/0.log"
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.229146 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/must-gather-vps8f"
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.364914 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output\") pod \"205cb55b-f489-4c55-aa9e-13f9ff38def6\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") "
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.364975 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkm4z\" (UniqueName: \"kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z\") pod \"205cb55b-f489-4c55-aa9e-13f9ff38def6\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") "
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.384832 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z" (OuterVolumeSpecName: "kube-api-access-hkm4z") pod "205cb55b-f489-4c55-aa9e-13f9ff38def6" (UID: "205cb55b-f489-4c55-aa9e-13f9ff38def6"). InnerVolumeSpecName "kube-api-access-hkm4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.471125 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkm4z\" (UniqueName: \"kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z\") on node \"crc\" DevicePath \"\""
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.535543 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "205cb55b-f489-4c55-aa9e-13f9ff38def6" (UID: "205cb55b-f489-4c55-aa9e-13f9ff38def6"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.553389 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-26llf_must-gather-vps8f_205cb55b-f489-4c55-aa9e-13f9ff38def6/copy/0.log"
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.554846 4739 generic.go:334] "Generic (PLEG): container finished" podID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerID="18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276" exitCode=143
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.554932 4739 scope.go:117] "RemoveContainer" containerID="18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276"
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.555795 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/must-gather-vps8f"
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.574238 4739 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.579008 4739 scope.go:117] "RemoveContainer" containerID="b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a"
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.638579 4739 scope.go:117] "RemoveContainer" containerID="18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276"
Feb 18 15:27:47 crc kubenswrapper[4739]: E0218 15:27:47.639173 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276\": container with ID starting with 18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276 not found: ID does not exist" containerID="18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276"
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.639208 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276"} err="failed to get container status \"18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276\": rpc error: code = NotFound desc = could not find container \"18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276\": container with ID starting with 18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276 not found: ID does not exist"
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.639233 4739 scope.go:117] "RemoveContainer" containerID="b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a"
Feb 18 15:27:47 crc kubenswrapper[4739]: E0218 15:27:47.639523 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a\": container with ID starting with b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a not found: ID does not exist" containerID="b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a"
Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.639549 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a"} err="failed to get container status \"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a\": rpc error: code = NotFound desc = could not find container \"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a\": container with ID starting with b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a not found: ID does not exist"
Feb 18 15:27:48 crc kubenswrapper[4739]: I0218 15:27:48.425815 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" path="/var/lib/kubelet/pods/1e938c96-4652-419c-97ce-90bb1d83768a/volumes"
Feb 18 15:27:48 crc kubenswrapper[4739]: I0218 15:27:48.426957 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" path="/var/lib/kubelet/pods/205cb55b-f489-4c55-aa9e-13f9ff38def6/volumes"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.552486 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qxkb4"]
Feb 18 15:28:03 crc kubenswrapper[4739]: E0218 15:28:03.555101 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="registry-server"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.555262 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="registry-server"
Feb 18 15:28:03 crc kubenswrapper[4739]: E0218 15:28:03.555341 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="gather"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.555400 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="gather"
Feb 18 15:28:03 crc kubenswrapper[4739]: E0218 15:28:03.555488 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="extract-content"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.555563 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="extract-content"
Feb 18 15:28:03 crc kubenswrapper[4739]: E0218 15:28:03.555648 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="extract-utilities"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.555703 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="extract-utilities"
Feb 18 15:28:03 crc kubenswrapper[4739]: E0218 15:28:03.555773 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="copy"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.555830 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="copy"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.556190 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="registry-server"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.556306 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="copy"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.556376 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="gather"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.558703 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.567222 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxkb4"]
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.587395 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.587510 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.587810 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvbcw\" (UniqueName: \"kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.689600 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvbcw\" (UniqueName: \"kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.689700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.689770 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.690416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.690643 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.712174 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvbcw\" (UniqueName: \"kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.886853 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:04 crc kubenswrapper[4739]: I0218 15:28:04.407849 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxkb4"]
Feb 18 15:28:05 crc kubenswrapper[4739]: I0218 15:28:05.759011 4739 generic.go:334] "Generic (PLEG): container finished" podID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerID="15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25" exitCode=0
Feb 18 15:28:05 crc kubenswrapper[4739]: I0218 15:28:05.759247 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerDied","Data":"15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25"}
Feb 18 15:28:05 crc kubenswrapper[4739]: I0218 15:28:05.759269 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerStarted","Data":"ea2cef7b03a77ba255750eb431c397b926fb1e4142bd2cb031d62aba0eddbe71"}
Feb 18 15:28:07 crc kubenswrapper[4739]: I0218 15:28:07.788047 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerStarted","Data":"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6"}
Feb 18 15:28:08 crc kubenswrapper[4739]: I0218 15:28:08.801574 4739 generic.go:334] "Generic (PLEG): container finished" podID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerID="fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6" exitCode=0
Feb 18 15:28:08 crc kubenswrapper[4739]: I0218 15:28:08.801676 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerDied","Data":"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6"}
Feb 18 15:28:09 crc kubenswrapper[4739]: I0218 15:28:09.828404 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerStarted","Data":"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f"}
Feb 18 15:28:13 crc kubenswrapper[4739]: I0218 15:28:13.887488 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:13 crc kubenswrapper[4739]: I0218 15:28:13.888049 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:13 crc kubenswrapper[4739]: I0218 15:28:13.946375 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:13 crc kubenswrapper[4739]: I0218 15:28:13.968205 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qxkb4" podStartSLOduration=7.440956607 podStartE2EDuration="10.96818788s" podCreationTimestamp="2026-02-18 15:28:03 +0000 UTC" firstStartedPulling="2026-02-18 15:28:05.76082594 +0000 UTC m=+5318.256546862" lastFinishedPulling="2026-02-18 15:28:09.288057223 +0000 UTC m=+5321.783778135" observedRunningTime="2026-02-18 15:28:09.85910422 +0000 UTC m=+5322.354825152" watchObservedRunningTime="2026-02-18 15:28:13.96818788 +0000 UTC m=+5326.463908812"
Feb 18 15:28:14 crc kubenswrapper[4739]: I0218 15:28:14.938306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:15 crc kubenswrapper[4739]: I0218 15:28:15.003623 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxkb4"]
Feb 18 15:28:16 crc kubenswrapper[4739]: I0218 15:28:16.914436 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qxkb4" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="registry-server" containerID="cri-o://2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f" gracePeriod=2
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.484276 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.541413 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvbcw\" (UniqueName: \"kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw\") pod \"7a65b49b-3fa1-452b-8859-a16f38792f96\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") "
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.541562 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities\") pod \"7a65b49b-3fa1-452b-8859-a16f38792f96\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") "
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.541665 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content\") pod \"7a65b49b-3fa1-452b-8859-a16f38792f96\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") "
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.542670 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities" (OuterVolumeSpecName: "utilities") pod "7a65b49b-3fa1-452b-8859-a16f38792f96" (UID: "7a65b49b-3fa1-452b-8859-a16f38792f96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.543165 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.550026 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw" (OuterVolumeSpecName: "kube-api-access-vvbcw") pod "7a65b49b-3fa1-452b-8859-a16f38792f96" (UID: "7a65b49b-3fa1-452b-8859-a16f38792f96"). InnerVolumeSpecName "kube-api-access-vvbcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.601706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a65b49b-3fa1-452b-8859-a16f38792f96" (UID: "7a65b49b-3fa1-452b-8859-a16f38792f96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.646164 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvbcw\" (UniqueName: \"kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw\") on node \"crc\" DevicePath \"\""
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.646468 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.927907 4739 generic.go:334] "Generic (PLEG): container finished" podID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerID="2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f" exitCode=0
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.927955 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerDied","Data":"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f"}
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.927998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerDied","Data":"ea2cef7b03a77ba255750eb431c397b926fb1e4142bd2cb031d62aba0eddbe71"}
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.927996 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxkb4"
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.928016 4739 scope.go:117] "RemoveContainer" containerID="2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f"
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.955569 4739 scope.go:117] "RemoveContainer" containerID="fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6"
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.973903 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxkb4"]
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.986994 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qxkb4"]
Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.996792 4739 scope.go:117] "RemoveContainer" containerID="15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25"
Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.034111 4739 scope.go:117] "RemoveContainer" containerID="2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f"
Feb 18 15:28:18 crc kubenswrapper[4739]: E0218 15:28:18.034796 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f\": container with ID starting with 2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f not found: ID does not exist" containerID="2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f"
Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.034846 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f"} err="failed to get container status \"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f\": rpc error: code = NotFound desc = could not find container \"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f\": container with ID starting with 2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f not found: ID does not exist"
Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.034878 4739 scope.go:117] "RemoveContainer" containerID="fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6"
Feb 18 15:28:18 crc kubenswrapper[4739]: E0218 15:28:18.035810 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6\": container with ID starting with fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6 not found: ID does not exist" containerID="fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6"
Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.035978 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6"} err="failed to get container status \"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6\": rpc error: code = NotFound desc = could not find container \"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6\": container with ID starting with fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6 not found: ID does not exist"
Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.036106 4739 scope.go:117] "RemoveContainer" containerID="15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25"
Feb 18 15:28:18 crc kubenswrapper[4739]: E0218 15:28:18.036926 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25\": container with ID starting with 15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25 not found: ID does not exist" containerID="15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25"
Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.036959 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25"} err="failed to get container status \"15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25\": rpc error: code = NotFound desc = could not find container \"15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25\": container with ID starting with 15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25 not found: ID does not exist"
Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.425721 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" path="/var/lib/kubelet/pods/7a65b49b-3fa1-452b-8859-a16f38792f96/volumes"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.433361 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"]
Feb 18 15:28:40 crc kubenswrapper[4739]: E0218 15:28:40.434303 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="registry-server"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.434316 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="registry-server"
Feb 18 15:28:40 crc kubenswrapper[4739]: E0218 15:28:40.434366 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="extract-content"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.434372 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="extract-content"
Feb 18 15:28:40 crc kubenswrapper[4739]: E0218 15:28:40.434384 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="extract-utilities"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.434390 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="extract-utilities"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.434608 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="registry-server"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.441003 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.458141 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"]
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.529095 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.529191 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.529255 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lp2d\" (UniqueName: \"kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.631839 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.631954 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.632030 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lp2d\" (UniqueName: \"kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.632414 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.632506 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.657462 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lp2d\" (UniqueName: \"kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.759737 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz2sx"
Feb 18 15:28:41 crc kubenswrapper[4739]: I0218 15:28:41.349925 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"]
Feb 18 15:28:42 crc kubenswrapper[4739]: I0218 15:28:42.203503 4739 generic.go:334] "Generic (PLEG): container finished" podID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerID="f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb" exitCode=0
Feb 18 15:28:42 crc kubenswrapper[4739]: I0218 15:28:42.203567 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerDied","Data":"f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb"}
Feb 18 15:28:42 crc kubenswrapper[4739]: I0218 15:28:42.203841 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerStarted","Data":"45ee3a9d84e4d126fe757c663dd3a9c627c41af630679532b55671b085971fca"}
Feb 18 15:28:43 crc kubenswrapper[4739]: I0218 15:28:43.228056 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx"
event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerStarted","Data":"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69"} Feb 18 15:28:44 crc kubenswrapper[4739]: I0218 15:28:44.240670 4739 generic.go:334] "Generic (PLEG): container finished" podID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerID="085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69" exitCode=0 Feb 18 15:28:44 crc kubenswrapper[4739]: I0218 15:28:44.240739 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerDied","Data":"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69"} Feb 18 15:28:45 crc kubenswrapper[4739]: I0218 15:28:45.254811 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerStarted","Data":"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5"} Feb 18 15:28:45 crc kubenswrapper[4739]: I0218 15:28:45.275737 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lz2sx" podStartSLOduration=2.758651493 podStartE2EDuration="5.275711548s" podCreationTimestamp="2026-02-18 15:28:40 +0000 UTC" firstStartedPulling="2026-02-18 15:28:42.206494121 +0000 UTC m=+5354.702215043" lastFinishedPulling="2026-02-18 15:28:44.723554176 +0000 UTC m=+5357.219275098" observedRunningTime="2026-02-18 15:28:45.271861261 +0000 UTC m=+5357.767582203" watchObservedRunningTime="2026-02-18 15:28:45.275711548 +0000 UTC m=+5357.771432480" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.023710 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"] Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.030753 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.047770 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"] Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.166801 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvfdc\" (UniqueName: \"kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.166879 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.167030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.270162 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.270553 4739 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bvfdc\" (UniqueName: \"kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.270615 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.270913 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.271053 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.302331 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvfdc\" (UniqueName: \"kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.352071 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.932097 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"] Feb 18 15:28:47 crc kubenswrapper[4739]: I0218 15:28:47.279562 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerID="7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe" exitCode=0 Feb 18 15:28:47 crc kubenswrapper[4739]: I0218 15:28:47.279659 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerDied","Data":"7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe"} Feb 18 15:28:47 crc kubenswrapper[4739]: I0218 15:28:47.279858 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerStarted","Data":"4b9eb25a03d864c06ee955eb56ba0fc1dba7e630ebfafd06afdd34f5de8380c2"} Feb 18 15:28:48 crc kubenswrapper[4739]: I0218 15:28:48.293996 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerStarted","Data":"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e"} Feb 18 15:28:50 crc kubenswrapper[4739]: I0218 15:28:50.760710 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:50 crc kubenswrapper[4739]: I0218 15:28:50.761738 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:50 crc kubenswrapper[4739]: I0218 15:28:50.820350 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:51 crc kubenswrapper[4739]: I0218 15:28:51.383301 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:52 crc kubenswrapper[4739]: I0218 15:28:52.004706 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"] Feb 18 15:28:53 crc kubenswrapper[4739]: I0218 15:28:53.356669 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerID="31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e" exitCode=0 Feb 18 15:28:53 crc kubenswrapper[4739]: I0218 15:28:53.356754 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerDied","Data":"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e"} Feb 18 15:28:53 crc kubenswrapper[4739]: I0218 15:28:53.357259 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lz2sx" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="registry-server" containerID="cri-o://484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5" gracePeriod=2 Feb 18 15:28:53 crc kubenswrapper[4739]: I0218 15:28:53.975058 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.068676 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities\") pod \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.068952 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content\") pod \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.068990 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lp2d\" (UniqueName: \"kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d\") pod \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.069321 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities" (OuterVolumeSpecName: "utilities") pod "23e26372-bcdf-4d10-ae5e-ae94c5a09f96" (UID: "23e26372-bcdf-4d10-ae5e-ae94c5a09f96"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.069756 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.075602 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d" (OuterVolumeSpecName: "kube-api-access-8lp2d") pod "23e26372-bcdf-4d10-ae5e-ae94c5a09f96" (UID: "23e26372-bcdf-4d10-ae5e-ae94c5a09f96"). InnerVolumeSpecName "kube-api-access-8lp2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.094104 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23e26372-bcdf-4d10-ae5e-ae94c5a09f96" (UID: "23e26372-bcdf-4d10-ae5e-ae94c5a09f96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.172510 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.172560 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lp2d\" (UniqueName: \"kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.371544 4739 generic.go:334] "Generic (PLEG): container finished" podID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerID="484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5" exitCode=0 Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.371614 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.372508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerDied","Data":"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5"} Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.372614 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerDied","Data":"45ee3a9d84e4d126fe757c663dd3a9c627c41af630679532b55671b085971fca"} Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.372648 4739 scope.go:117] "RemoveContainer" containerID="484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.375125 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerStarted","Data":"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45"} Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.410308 4739 scope.go:117] "RemoveContainer" containerID="085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.434781 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bkfnv" podStartSLOduration=2.95510517 podStartE2EDuration="9.43475232s" podCreationTimestamp="2026-02-18 15:28:45 +0000 UTC" firstStartedPulling="2026-02-18 15:28:47.281621531 +0000 UTC m=+5359.777342453" lastFinishedPulling="2026-02-18 15:28:53.761268681 +0000 UTC m=+5366.256989603" observedRunningTime="2026-02-18 15:28:54.412181384 +0000 UTC m=+5366.907902336" watchObservedRunningTime="2026-02-18 15:28:54.43475232 +0000 UTC m=+5366.930473262" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.442104 4739 scope.go:117] "RemoveContainer" containerID="f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.459072 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"] Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.460694 4739 scope.go:117] "RemoveContainer" containerID="484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5" Feb 18 15:28:54 crc kubenswrapper[4739]: E0218 15:28:54.461115 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5\": container with ID starting with 484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5 not found: ID does not exist" 
containerID="484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.461171 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5"} err="failed to get container status \"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5\": rpc error: code = NotFound desc = could not find container \"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5\": container with ID starting with 484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5 not found: ID does not exist" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.461202 4739 scope.go:117] "RemoveContainer" containerID="085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69" Feb 18 15:28:54 crc kubenswrapper[4739]: E0218 15:28:54.461650 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69\": container with ID starting with 085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69 not found: ID does not exist" containerID="085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.461749 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69"} err="failed to get container status \"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69\": rpc error: code = NotFound desc = could not find container \"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69\": container with ID starting with 085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69 not found: ID does not exist" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.461816 4739 scope.go:117] 
"RemoveContainer" containerID="f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb" Feb 18 15:28:54 crc kubenswrapper[4739]: E0218 15:28:54.462203 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb\": container with ID starting with f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb not found: ID does not exist" containerID="f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.462309 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb"} err="failed to get container status \"f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb\": rpc error: code = NotFound desc = could not find container \"f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb\": container with ID starting with f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb not found: ID does not exist" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.478506 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"] Feb 18 15:28:56 crc kubenswrapper[4739]: I0218 15:28:56.352984 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:56 crc kubenswrapper[4739]: I0218 15:28:56.353588 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:56 crc kubenswrapper[4739]: I0218 15:28:56.437392 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" path="/var/lib/kubelet/pods/23e26372-bcdf-4d10-ae5e-ae94c5a09f96/volumes" Feb 18 15:28:57 crc kubenswrapper[4739]: I0218 
15:28:57.411172 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkfnv" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" probeResult="failure" output=< Feb 18 15:28:57 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:28:57 crc kubenswrapper[4739]: > Feb 18 15:28:59 crc kubenswrapper[4739]: I0218 15:28:59.372368 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:28:59 crc kubenswrapper[4739]: I0218 15:28:59.373099 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:29:07 crc kubenswrapper[4739]: I0218 15:29:07.407506 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkfnv" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" probeResult="failure" output=< Feb 18 15:29:07 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:29:07 crc kubenswrapper[4739]: > Feb 18 15:29:17 crc kubenswrapper[4739]: I0218 15:29:17.862702 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkfnv" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" probeResult="failure" output=< Feb 18 15:29:17 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:29:17 crc kubenswrapper[4739]: > Feb 18 15:29:27 crc kubenswrapper[4739]: I0218 
15:29:27.732836 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkfnv" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" probeResult="failure" output=< Feb 18 15:29:27 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:29:27 crc kubenswrapper[4739]: > Feb 18 15:29:29 crc kubenswrapper[4739]: I0218 15:29:29.373363 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:29:29 crc kubenswrapper[4739]: I0218 15:29:29.373760 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:29:36 crc kubenswrapper[4739]: I0218 15:29:36.423869 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:29:36 crc kubenswrapper[4739]: I0218 15:29:36.478637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:29:36 crc kubenswrapper[4739]: I0218 15:29:36.662431 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"] Feb 18 15:29:37 crc kubenswrapper[4739]: I0218 15:29:37.857724 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bkfnv" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" 
containerID="cri-o://bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45" gracePeriod=2 Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.398631 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.504738 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities\") pod \"1ca25b9b-aaec-4d87-aa25-9c003455730c\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.504895 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvfdc\" (UniqueName: \"kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc\") pod \"1ca25b9b-aaec-4d87-aa25-9c003455730c\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.504941 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content\") pod \"1ca25b9b-aaec-4d87-aa25-9c003455730c\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.505945 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities" (OuterVolumeSpecName: "utilities") pod "1ca25b9b-aaec-4d87-aa25-9c003455730c" (UID: "1ca25b9b-aaec-4d87-aa25-9c003455730c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.514630 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc" (OuterVolumeSpecName: "kube-api-access-bvfdc") pod "1ca25b9b-aaec-4d87-aa25-9c003455730c" (UID: "1ca25b9b-aaec-4d87-aa25-9c003455730c"). InnerVolumeSpecName "kube-api-access-bvfdc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.608389 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.608424 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvfdc\" (UniqueName: \"kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc\") on node \"crc\" DevicePath \"\""
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.628041 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ca25b9b-aaec-4d87-aa25-9c003455730c" (UID: "1ca25b9b-aaec-4d87-aa25-9c003455730c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.712144 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.871986 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerID="bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45" exitCode=0
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.872036 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerDied","Data":"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45"}
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.872064 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerDied","Data":"4b9eb25a03d864c06ee955eb56ba0fc1dba7e630ebfafd06afdd34f5de8380c2"}
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.872085 4739 scope.go:117] "RemoveContainer" containerID="bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45"
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.872275 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkfnv"
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.915712 4739 scope.go:117] "RemoveContainer" containerID="31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e"
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.923321 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"]
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.933226 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"]
Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.936260 4739 scope.go:117] "RemoveContainer" containerID="7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe"
Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.009041 4739 scope.go:117] "RemoveContainer" containerID="bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45"
Feb 18 15:29:39 crc kubenswrapper[4739]: E0218 15:29:39.009793 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45\": container with ID starting with bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45 not found: ID does not exist" containerID="bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45"
Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.009836 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45"} err="failed to get container status \"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45\": rpc error: code = NotFound desc = could not find container \"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45\": container with ID starting with bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45 not found: ID does not exist"
Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.009862 4739 scope.go:117] "RemoveContainer" containerID="31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e"
Feb 18 15:29:39 crc kubenswrapper[4739]: E0218 15:29:39.010313 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e\": container with ID starting with 31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e not found: ID does not exist" containerID="31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e"
Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.010340 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e"} err="failed to get container status \"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e\": rpc error: code = NotFound desc = could not find container \"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e\": container with ID starting with 31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e not found: ID does not exist"
Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.010355 4739 scope.go:117] "RemoveContainer" containerID="7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe"
Feb 18 15:29:39 crc kubenswrapper[4739]: E0218 15:29:39.010683 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe\": container with ID starting with 7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe not found: ID does not exist" containerID="7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe"
Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.010746 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe"} err="failed to get container status \"7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe\": rpc error: code = NotFound desc = could not find container \"7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe\": container with ID starting with 7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe not found: ID does not exist"
Feb 18 15:29:40 crc kubenswrapper[4739]: I0218 15:29:40.423167 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" path="/var/lib/kubelet/pods/1ca25b9b-aaec-4d87-aa25-9c003455730c/volumes"
Feb 18 15:29:59 crc kubenswrapper[4739]: I0218 15:29:59.372842 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 15:29:59 crc kubenswrapper[4739]: I0218 15:29:59.375216 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 15:29:59 crc kubenswrapper[4739]: I0218 15:29:59.375299 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4"
Feb 18 15:29:59 crc kubenswrapper[4739]: I0218 15:29:59.376209 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 15:29:59 crc kubenswrapper[4739]: I0218 15:29:59.376270 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d" gracePeriod=600
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.116113 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d" exitCode=0
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.116182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d"}
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.116955 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"}
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.116980 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.204688 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"]
Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205391 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="extract-content"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205419 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="extract-content"
Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205463 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="registry-server"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205474 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="registry-server"
Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205522 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205538 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server"
Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205566 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="extract-utilities"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205578 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="extract-utilities"
Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205602 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="extract-content"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205613 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="extract-content"
Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205644 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="extract-utilities"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205655 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="extract-utilities"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.206001 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.206058 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="registry-server"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.207230 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.210361 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.211286 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.219397 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"]
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.341367 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.341624 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.341979 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.445379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.445545 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.445647 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.446438 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.454363 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.465328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.527009 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:01 crc kubenswrapper[4739]: W0218 15:30:01.041510 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d5496ac_13c6_454a_8f4f_c5d40c7bf53f.slice/crio-fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d WatchSource:0}: Error finding container fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d: Status 404 returned error can't find the container with id fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d
Feb 18 15:30:01 crc kubenswrapper[4739]: I0218 15:30:01.048476 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"]
Feb 18 15:30:01 crc kubenswrapper[4739]: I0218 15:30:01.146719 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" event={"ID":"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f","Type":"ContainerStarted","Data":"fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d"}
Feb 18 15:30:02 crc kubenswrapper[4739]: I0218 15:30:02.161796 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" event={"ID":"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f","Type":"ContainerStarted","Data":"de46eaa47d983bbe7229ec06638adabaf278d0bae33499bd90a6086fc101208a"}
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.176049 4739 generic.go:334] "Generic (PLEG): container finished" podID="5d5496ac-13c6-454a-8f4f-c5d40c7bf53f" containerID="de46eaa47d983bbe7229ec06638adabaf278d0bae33499bd90a6086fc101208a" exitCode=0
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.176114 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" event={"ID":"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f","Type":"ContainerDied","Data":"de46eaa47d983bbe7229ec06638adabaf278d0bae33499bd90a6086fc101208a"}
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.596118 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.753673 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6\") pod \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") "
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.753792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume\") pod \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") "
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.753840 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume\") pod \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") "
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.756094 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume" (OuterVolumeSpecName: "config-volume") pod "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f" (UID: "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.762686 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6" (OuterVolumeSpecName: "kube-api-access-hl9l6") pod "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f" (UID: "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f"). InnerVolumeSpecName "kube-api-access-hl9l6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.763044 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f" (UID: "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.857643 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6\") on node \"crc\" DevicePath \"\""
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.857683 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.857694 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 15:30:04 crc kubenswrapper[4739]: I0218 15:30:04.189699 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" event={"ID":"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f","Type":"ContainerDied","Data":"fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d"}
Feb 18 15:30:04 crc kubenswrapper[4739]: I0218 15:30:04.189745 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d"
Feb 18 15:30:04 crc kubenswrapper[4739]: I0218 15:30:04.189811 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"
Feb 18 15:30:04 crc kubenswrapper[4739]: I0218 15:30:04.681939 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"]
Feb 18 15:30:04 crc kubenswrapper[4739]: I0218 15:30:04.694716 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"]
Feb 18 15:30:06 crc kubenswrapper[4739]: I0218 15:30:06.423263 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d759be05-a3d9-4dd0-b360-dc1f752b84be" path="/var/lib/kubelet/pods/d759be05-a3d9-4dd0-b360-dc1f752b84be/volumes"
Feb 18 15:30:39 crc kubenswrapper[4739]: I0218 15:30:39.620282 4739 scope.go:117] "RemoveContainer" containerID="aa9ba9ec1d52c3700b6b7f0b25f14494ecf423b123e22d781f5b92c7a26b7e48"
Feb 18 15:31:59 crc kubenswrapper[4739]: I0218 15:31:59.372905 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 15:31:59 crc kubenswrapper[4739]: I0218 15:31:59.373511 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 15:32:29 crc kubenswrapper[4739]: I0218 15:32:29.372649 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 15:32:29 crc kubenswrapper[4739]: I0218 15:32:29.373073 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 15:32:59 crc kubenswrapper[4739]: I0218 15:32:59.373250 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 15:32:59 crc kubenswrapper[4739]: I0218 15:32:59.373943 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 15:32:59 crc kubenswrapper[4739]: I0218 15:32:59.374008 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4"
Feb 18 15:32:59 crc kubenswrapper[4739]: I0218 15:32:59.375052 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 15:32:59 crc kubenswrapper[4739]: I0218 15:32:59.375121 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" gracePeriod=600
Feb 18 15:32:59 crc kubenswrapper[4739]: E0218 15:32:59.533514 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:33:00 crc kubenswrapper[4739]: I0218 15:33:00.263089 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" exitCode=0
Feb 18 15:33:00 crc kubenswrapper[4739]: I0218 15:33:00.263152 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"}
Feb 18 15:33:00 crc kubenswrapper[4739]: I0218 15:33:00.263208 4739 scope.go:117] "RemoveContainer" containerID="fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d"
Feb 18 15:33:00 crc kubenswrapper[4739]: I0218 15:33:00.264078 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:33:00 crc kubenswrapper[4739]: E0218 15:33:00.264567 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:33:14 crc kubenswrapper[4739]: I0218 15:33:14.410628 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:33:14 crc kubenswrapper[4739]: E0218 15:33:14.411437 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:33:29 crc kubenswrapper[4739]: I0218 15:33:29.410288 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:33:29 crc kubenswrapper[4739]: E0218 15:33:29.411406 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:33:41 crc kubenswrapper[4739]: I0218 15:33:41.411136 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:33:41 crc kubenswrapper[4739]: E0218 15:33:41.412124 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:33:53 crc kubenswrapper[4739]: I0218 15:33:53.411778 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:33:53 crc kubenswrapper[4739]: E0218 15:33:53.412841 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:34:07 crc kubenswrapper[4739]: I0218 15:34:07.411347 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:34:07 crc kubenswrapper[4739]: E0218 15:34:07.412327 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:34:20 crc kubenswrapper[4739]: I0218 15:34:20.412025 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:34:20 crc kubenswrapper[4739]: E0218 15:34:20.413200 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:34:32 crc kubenswrapper[4739]: I0218 15:34:32.411976 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:34:32 crc kubenswrapper[4739]: E0218 15:34:32.413184 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:34:45 crc kubenswrapper[4739]: I0218 15:34:45.411547 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:34:45 crc kubenswrapper[4739]: E0218 15:34:45.412533 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\""
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:34:59 crc kubenswrapper[4739]: I0218 15:34:59.411070 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:34:59 crc kubenswrapper[4739]: E0218 15:34:59.412206 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"
Feb 18 15:35:11 crc kubenswrapper[4739]: I0218 15:35:11.410974 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"
Feb 18 15:35:11 crc kubenswrapper[4739]: E0218 15:35:11.411818 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"